The Global Experimentation for Future Internet (GEFI) community connects researchers and research sponsors in the EU, US, Japan, Korea, and Brazil to advance international collaboration for experimental research in future networks. GEFI 2019 is the third workshop in the GEFI series, which expands on several previous bilateral and regional international collaborations.
NICT provides testbed facilities such as JGN, JOSE, RISE, and StarBED to promote research and development of information and communications technology. Recently, the trend toward open networking has been accelerating, and many projects have proposed innovative networking mechanisms that exploit programmable networking capabilities. In particular, data-plane programming with the P4 programming language has attracted much attention from researchers and developers because it enables more flexible and stateful packet processing. We are therefore considering supporting network programmability with P4 in our testbed environments and providing a P4 testbed in the future.
To realize the P4 testbed, we have a stepwise plan. As the first step, we are considering using the P4 behavioral model (bmv2), which runs as a P4 software switch. Among the NICT testbeds, RISE is an SDN/OpenFlow testbed that already provides network environments built on software switches as well as on hardware switches. We will therefore use this software-based RISE environment and replace its software switches with bmv2 instances to create a P4 testbed environment.
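As a rough illustration of this first step, bmv2 instances can be launched with the simple_switch target from the p4lang behavioral-model project; in the sketch below, the interface names, Thrift ports, and compiled P4 program are hypothetical placeholders, not the actual RISE configuration:

```python
# Hypothetical sketch: start bmv2 simple_switch instances in place of the
# existing RISE software switches. Interface names, Thrift ports, and the
# compiled P4 program (p4c output) are placeholders.
import subprocess

switches = [
    {"thrift_port": 9090, "ifaces": {0: "veth0", 1: "veth2"}},
    {"thrift_port": 9091, "ifaces": {0: "veth4", 1: "veth6"}},
]

procs = []
for sw in switches:
    cmd = ["simple_switch", "--thrift-port", str(sw["thrift_port"])]
    for port, iface in sorted(sw["ifaces"].items()):
        cmd += ["-i", f"{port}@{iface}"]   # bind switch port to interface
    cmd.append("basic_forwarding.json")    # JSON emitted by the p4c compiler
    procs.append(subprocess.Popen(cmd))
```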
As the second step, we will introduce P4-enabled hardware switches. The challenge here is how to achieve multi-tenancy. RISE provides multi-tenancy on hardware switches by using the virtualization (slicing) mechanisms implemented in those switches. However, at the time of writing, we have not found equipment that supports both P4 programmability and virtualization.
In the workshop, I will present our P4 testbed plan, discuss use cases for P4, and look for opportunities to collaborate on the P4 testbed development.
A growing number of scientific fields require the ability to analyze data in near real-time, so that results from one experiment can guide selection of the next, or even influence the course of a single experiment. Experiments are often tightly scheduled, with timing driven by factors ranging from the physical processes involved to the travel schedules of on-site researchers. With improvements in the sensor and detector technologies at experimental facilities (e.g., synchrotron light sources and neutron sources), the data produced at these facilities significantly exceed their local processing capabilities. The data therefore need to be moved to remote compute facilities, both within and outside a country (or continent), as the users of these facilities often span diverse geographic locations.
The computing and network resources must be available at a specific time, for a specific period. On-demand network bandwidth, though provided by backbone research and education networks such as ESnet and Internet2, is not easy to obtain end-to-end in an automated fashion. Even though compute resources can be obtained on demand (at least at some institutions), those resources are not typically connected to the wide-area network (WAN). In the typical model, data coming from the WAN land in the parallel file system via dedicated data transfer nodes (DTNs), and compute nodes access the data from the parallel file system. This model does not work well for near real-time analysis of the data streams coming from an experiment or simulation. We need international (and intercontinental) testbeds to evaluate solutions that enable these emerging science workflows.
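Purely as an illustration of the contrast being drawn (not from the abstract itself), the streaming model lets analysis consume data as it arrives rather than after it has been staged on a parallel file system; the endpoint and analysis step below are hypothetical:

```python
# Hypothetical sketch: consume an instrument's data stream directly,
# bypassing the stage-to-parallel-file-system step. The DTN endpoint
# and the analysis function are placeholders, not real services.
import socket

HOST, PORT = "dtn.example.org", 9000   # hypothetical streaming DTN endpoint
CHUNK = 1 << 20                        # read in 1 MiB chunks

def analyze(chunk: bytes) -> None:
    """Placeholder for the near real-time analysis step."""
    print(f"analyzed {len(chunk)} bytes")

with socket.create_connection((HOST, PORT)) as conn:
    while True:
        data = conn.recv(CHUNK)
        if not data:           # stream closed by the sender
            break
        analyze(data)          # results available while the experiment runs
```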
Modern data-centric applications are among the major drivers of next-generation Internet and network infrastructure innovation. These applications, often rooted in broad societal challenges such as overpopulation and diminishing natural resources, cut across many scientific domains and require capabilities for collecting, transferring, and processing data from a broad range of sources.
These applications can be effectively enabled, however, only in the presence of a supporting research infrastructure, which should provide the tools for searching, accessing, and integrating data and software for different workflows within scientists' research activities. The recent paradigm shift toward data-centric approaches has further motivated the development of advanced network and computing technologies, e.g., SDN (software-defined networking), ICN (information-centric networking), and 5G, as well as cloud technologies in edge cloud and machine learning (ML). In the following, we use our recent research experience in supporting environmental research as an example to help lay out our collaborative research agenda.
With the ever-growing complexity of networks, researchers have to rely on testbeds to fully assess the quality of their propositions. Meanwhile, Mininet offers a simple yet powerful API, the goldilocks of network emulators. We advocate that the Mininet API is the right level of abstraction for network experiments. Unfortunately, it is designed to run on a single machine. To address this issue we developed a distributed version of Mininet, Distrinet, that can be used to perform network experiments in any Linux-based testbed, public or private. To properly use testbed resources and avoid overcommitment that would lead to inaccurate results, Distrinet uses optimization techniques that determine how to orchestrate the experiments within the testbed. Its programmatic approach, its ability to work on various testbeds, and its optimal management of resources make Distrinet a key element of reproducible research.
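Since Distrinet advertises compatibility with the Mininet API, a standard Mininet script such as the minimal sketch below should, per the authors' claim, carry over to a distributed deployment; the topology and link parameters here are illustrative only:

```python
# Minimal Mininet experiment: two hosts behind one switch with shaped
# links, followed by a connectivity check. Parameters are illustrative.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoHostTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        self.addLink(h1, s1, bw=10, delay='5ms')  # 10 Mbit/s, 5 ms latency
        self.addLink(h2, s1, bw=10, delay='5ms')

net = Mininet(topo=TwoHostTopo(), link=TCLink)
net.start()
h1, h2 = net.get('h1', 'h2')
print(h1.cmd(f'ping -c 3 {h2.IP()}'))  # verify end-to-end connectivity
net.stop()
```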
Network Function Virtualization (NFV), coupled with Software Defined Networking (SDN), promises to revolutionize networking by allowing network operators to dynamically modify and manage networks. Operators can create, update, remove, or scale out/in network functions (NFs) on demand, construct a sequence of NFs to form a so-called service function chain (SFC), and steer traffic through it to meet various policy and service requirements. In the emerging 5G technologies, besides innovations in radio technologies such as 5G New Radio (NR), NFV will be a key enabling technology underpinning the envisioned 5G "Cloud RANs" (radio access networks), MECs (mobile edge clouds), and packet core networks, supporting network slicing and diverse services ranging from enhanced mobile broadband (eMBB) to massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC). For example, upon a request for a service (e.g., from a mobile user or a machine, say, an autonomous vehicle or an industrial controller), an SFC will be dynamically constructed using a series of virtualized network functions (vNFs), such as firewalls, mobility managers, network address translators, traffic shapers, and so forth, that are deployed on demand at appropriate locations within a (dynamic) network slice to meet the desired service requirements.
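To make the SFC notion concrete, here is a deliberately simplified, hypothetical sketch (not the paper's system) that models a chain as an ordered list of vNF handlers and steers traffic through them in order:

```python
# Hypothetical sketch: an SFC as an ordered list of vNF handlers.
# All names and the pass-through handlers are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VNF:
    name: str
    process: Callable[[bytes], bytes]  # per-packet handler

def build_chain(vnfs: List[VNF]) -> Callable[[bytes], bytes]:
    """Compose vNFs so traffic traverses them in chain order."""
    def steer(packet: bytes) -> bytes:
        for vnf in vnfs:
            packet = vnf.process(packet)
        return packet
    return steer

# hypothetical chain: firewall -> NAT -> traffic shaper
sfc = build_chain([
    VNF("firewall", lambda p: p),  # drop/accept logic elided
    VNF("nat", lambda p: p),       # address translation elided
    VNF("shaper", lambda p: p),    # rate limiting elided
])
print(sfc(b"example packet"))
```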
Unmanned aerial vehicles (UAVs) or drone systems equipped with cameras are extensively used in different surveillance scenarios and often require real-time control and high-quality video transmission. However, unstable network conditions and the choice of transport protocol can introduce impairments during video streaming, which in turn degrade the user's quality of experience (QoE). In this position statement, we present requirements for a dynamic edge/cloud computation offloading and control framework that handles video processing from IoT devices in the field for public safety and precision agriculture use cases. The framework features image impairment detection under varying available network bandwidth and adapts transport protocols (e.g., QUIC) for air-to-ground, air-to-air, and ground-to-ground data transfers. We present results from a preliminary implementation of our framework, viz. DyCOCo, in a testbed setup on the GENI infrastructure. Our demo results show that DyCOCo can efficiently choose suitable networking protocols and orchestrate both the camera control on the drone and the computation offloading of the video analytics over limited edge computing/networking resources.
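The kind of adaptation described, choosing a transport based on observed network conditions, could look like the following hypothetical sketch; the thresholds and protocol choices are illustrative, not DyCOCo's actual policy:

```python
# Hypothetical sketch of condition-based transport selection for the
# video stream. Thresholds are illustrative, not DyCOCo's actual logic.
def pick_transport(bandwidth_mbps: float, loss_rate: float) -> str:
    """Return the transport to use for the next streaming segment."""
    if loss_rate > 0.02 or bandwidth_mbps < 5.0:
        return "quic"   # assumed to cope better with loss and handovers
    return "tcp"        # default when the path is stable

# e.g., a measured 3 Mbit/s link with 1% loss selects QUIC
print(pick_transport(3.0, 0.01))
```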
The increased ubiquity of sensing via smart and IoT devices, in smart homes and smart healthcare for example, has caused a surge in the generation and use of sensitive and personal data, from browsing habits to purchasing patterns to real-time location to personal health information. Unfortunately, our ability to collect and process data has overwhelmed our ability to protect it; concerns over privacy, trust, and security are becoming increasingly important as different stakeholders attempt to take advantage of such rich data resources. In addition, different applications on these devices produce diverse traffic characteristics that require different levels of reliability, loss, and latency. It therefore becomes essential to have greater visibility into, and control over, the traffic generated by smart and IoT devices in order to guarantee optimized application performance and a high quality of experience for users. In this research, we aim to design and develop the ExtremeDataHub platform: an open-source, flexible, and programmable networked edge device that collates and mediates access to our sensitive and personal data under the data subject's control, and that copes with the varied characteristics and requirements of the smart and IoT applications accessing this data, in order to provide better performance and quality of experience to users.
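A toy, hypothetical sketch of the mediation idea (not the ExtremeDataHub implementation): the hub checks a subject-defined policy before releasing any reading to a requesting application:

```python
# Hypothetical sketch: subject-controlled access mediation at the edge.
# Requesters, data types, and the policy format are illustrative only.
POLICY = {
    "health_app": {"heart_rate"},  # subject allows only heart-rate readings
    "ad_network": set(),           # subject denies all access
}

def mediate(requester: str, data_type: str, reading: float) -> float:
    """Release a reading only if the data subject's policy permits it."""
    if data_type not in POLICY.get(requester, set()):
        raise PermissionError(f"{requester} may not read {data_type}")
    return reading

print(mediate("health_app", "heart_rate", 72.0))  # allowed by policy
```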
FABRIC and International Testbed Collaboration
Paul Ruth, pruth@renci.org, RENCI - UNC Chapel Hill (author and attendee)
This document is a response to the GEFI 2019 call for position statements. It includes two contributions of interest to the GEFI community: first, an announcement of FABRIC, the $20 million NSF networking testbed; second, a description of a new collaboration between Chameleon, ExoGENI, and CityLab (Antwerp). Paul Ruth would like to share both topics and is willing to help organize a session for this purpose if needed.
Announcing FABRIC
On September 17, 2019, the NSF announced a $20 million collaborative project, led by RENCI - UNC Chapel Hill, to create a platform for testing novel Internet architectures that could enable a faster, more secure Internet. FABRIC will provide a nationwide testbed for reimagining how data can be stored, computed, and moved through shared infrastructure. FABRIC will allow scientists to explore what a new Internet could look like at scale and will help determine the Internet architecture of the future.
A series of government-funded programs from the 1960s through the 1980s established the computer networking architectures that formed the basis for today's Internet. FABRIC will help test new network designs that could overcome current bottlenecks and continue to extend the Internet's broad benefits for science and society. FABRIC will explore the balance between the amount of information a network maintains, the network's ability to process that information, and its scalability, performance, and security.
The core FABRIC team includes RENCI, the University of Kentucky, the Department of Energy’s Energy Sciences Network (ESnet), Clemson University, and the Illinois Institute of Technology. Contributors from the University of Kentucky and ESnet will be instrumental in designing and deploying the platform’s hardware and developing new software. Clemson and Illinois Institute of Technology researchers will work with a wide variety of user communities—including those focused on security, distributed architectures, scientific applications and data transfer protocols—to ensure FABRIC can serve their needs. In addition, researchers from many other universities will help test the platform and integrate their computing infrastructure and scientific instruments into FABRIC.
The construction phase of the project is expected to last four years, with the first year dedicated to software development, finalizing technical designs, and prototyping. Subsequent years will focus on rolling out the platform's hardware to participating sites across the nation and connecting it to major national computing facilities. Ultimately, national and international experimenter communities will be able to attach new instruments or hardware resources to FABRIC's uniquely extensible design, allowing the infrastructure to grow and adapt to changing research needs over time. The FABRIC team is currently looking to build a community of experimenters and facility partners who will provide insight into the testbed design through community workshops starting in early 2020.
Antwerp CityLab (imec/UA) - Chameleon/ExoGENI Collaboration
In July 2019, Paul Ruth traveled to Antwerp and Ghent, Belgium, to kick off a research collaboration with several members of Prof. Johann Marquez-Barja's research group, which operates the Antwerp CityLab as part of IDLab/IMEC. The intent of the meetings was to foster an emerging collaboration between CityLab, ExoGENI, and NSFCloud Chameleon, with the goal of supporting global networking experiments that span all of these testbeds.
The CityLab in Antwerp is a great place to deploy smart city experiments requiring low-latency local edge computing capabilities. However, it has limited access to regional private clouds and large remote clouds. An emerging collaboration between CityLab, ExoGENI, and NSFCloud Chameleon aims to enable tiered experiments that use regional private clouds (ExoGENI at the University of Amsterdam) and large remote clouds (NSFCloud Chameleon). The goal of this collaboration is to enable experiments spanning the three testbeds.
The meetings began with presentations to Paul Ruth by IDLab researchers Jeroen Famaey and Johann Marquez-Barja about the roles of IMEC-IDLab and the many different testbeds that IDLab operates (including Antwerp's CityLab). The remainder of the day focused on how to enable experiments spanning the three testbeds, and the discussions produced a much better understanding of the possibilities and limitations involved. A second day of meetings, in Ghent, Belgium, was hosted by Brecht Vermeulen at the IDLab-IMEC facilities there. Brecht is responsible for Fed4Fire, which is needed to "stitch" an ExoGENI circuit to CityLab. The meetings in Ghent included very low-level discussions about how the stitched circuit would be implemented. One unexpected outcome of the meeting is that we now plan to use a generic way to stitch ExoGENI to Fed4Fire; this more generic technique will enable stitching between ExoGENI and several other Fed4Fire testbeds, including CityLab and Grid5000. The resulting plan is currently being deployed.
We plan to continue deploying the mechanisms required for experiments spanning the Chameleon, ExoGENI, and Fed4Fire testbeds. We hope to present initial experiments at the 2019 GEFI workshop and to perform a more robust experiment that will result in a published paper. As FABRIC is developed, this initial collaboration will spur international collaboration with NSF's newest networking testbed.
With its national and international research partners, the International Center for Advanced Internet Research (iCAIR) at Northwestern University designs, develops, implements, and operates large-scale, including worldwide, computer science testbeds. Generally, with its research partners, iCAIR operates between 25 and 30 national, international, and local testbeds. The majority have been designed and implemented as network research testbeds, but several are distributed compute fabrics, including the NSFCloud Chameleon, several computational science clouds, and computational science Grid facilities. iCAIR policies, procedures, and technologies strongly support international collaboration and testbed federation.
5G-DIVE targets end-to-end 5G trials aimed at proving the technical merits and business value proposition of 5G technologies in two vertical pilots, namely (i) Industry 4.0 and (ii) Autonomous Drone Scout. Its design is built around two main pillars: (1) end-to-end 5G connectivity, including 5G New Radio, Crosshaul transport, and the 5G Core; and (2) distributed edge and fog computing that places intelligence close to the user to achieve optimized performance, significantly improving the business value proposition of 5G in each targeted vertical application.
In this position paper, CTTC and the University of Washington provide a status update on new protocol stacks for end-to-end and multi-RAT scenario simulations in the open-source network simulator ns-3 (www.nsnam.org). Specifically, the recent advances enable network performance evaluation research in the emerging area of 5G NR-U and IEEE 802.11ax coexistence in the unlicensed bands. The work builds on previous open contributions by the two partners in the areas of NR and IEEE 802.11, respectively, within a long track record of successful collaborations on LTE-LAA and LTE for public safety.
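For orientation, the basic ns-3 workflow (build a topology, install the stack and applications, run the simulator) is sketched below using the Python bindings, following the standard tutorial pattern; this is a generic illustration, not the NR-U/802.11ax coexistence modules themselves, which are developed in C++, and binding availability varies by ns-3 version:

```python
# Generic ns-3 workflow sketch (Python bindings): two nodes on a
# point-to-point link running a UDP echo client/server pair.
import ns.core
import ns.network
import ns.point_to_point
import ns.internet
import ns.applications

nodes = ns.network.NodeContainer()
nodes.Create(2)

p2p = ns.point_to_point.PointToPointHelper()
p2p.SetDeviceAttribute("DataRate", ns.core.StringValue("5Mbps"))
p2p.SetChannelAttribute("Delay", ns.core.StringValue("2ms"))
devices = p2p.Install(nodes)

stack = ns.internet.InternetStackHelper()
stack.Install(nodes)
address = ns.internet.Ipv4AddressHelper()
address.SetBase(ns.network.Ipv4Address("10.1.1.0"),
                ns.network.Ipv4Mask("255.255.255.0"))
interfaces = address.Assign(devices)

server = ns.applications.UdpEchoServerHelper(9)
serverApps = server.Install(nodes.Get(1))
serverApps.Start(ns.core.Seconds(1.0))
serverApps.Stop(ns.core.Seconds(10.0))

client = ns.applications.UdpEchoClientHelper(interfaces.GetAddress(1), 9)
client.SetAttribute("MaxPackets", ns.core.UintegerValue(1))
clientApps = client.Install(nodes.Get(0))
clientApps.Start(ns.core.Seconds(2.0))
clientApps.Stop(ns.core.Seconds(10.0))

ns.core.Simulator.Run()
ns.core.Simulator.Destroy()
```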
In this position paper, we focus on the NITOS testbed and the experimentally driven 5G activities around the established experimentation ecosystem it provides. NITOS is a highly heterogeneous testbed located on the premises of the University of Thessaly, Greece. The testbed provides remote access to experimenters from around the globe, allowing repeatable experimentation with cutting-edge resources. We cover some of our main contributions: frameworks for experimentation in cloud-based radio access networks, multi-access edge computing, and spectrum coordination, as well as frameworks for orchestrating different software functions as VNFs.
Computational science today depends on complex, data-intensive applications operating on datasets from a variety of scientific instruments. A major challenge is the integration of data into the scientist's workflow. Recent advances in dynamic, networked cloud resources provide the building blocks to construct reconfigurable, end-to-end infrastructure that can increase scientific productivity. However, applications have not adequately taken advantage of these advanced capabilities. In the context of the DyNamo [4] project, funded under the NSF Campus Cyberinfrastructure program, we have developed a novel network-centric platform, Mobius [7], which enables high-performance, adaptive data flows and coordinated access to distributed multi-cloud resources (cloud research testbeds like ExoGENI [1], Chameleon [2], XSEDE JetStream [3], etc.) and data repositories for atmospheric scientists.
Cloud testbeds are critical for enabling research into new cloud technologies: research that requires experiments which potentially change the operation of the cloud itself. Several such testbeds have been created in the recent past (e.g., Chameleon, CloudLab) with the goal of supporting the CISE systems research community. It has been shown that these testbeds are very popular and heavily used by the research community [1]. Testbed utilization often reaches 100%, especially ahead of deadlines for major systems conferences, while there are also periods of modest (<40%) usage.
Computer science experimental testbeds allow investigators to explore a broad range of state-of-the-art hardware options, assess the scalability of their systems, and work under conditions that provide deep reconfigurability and isolation, so that one user does not impact the experiments of another. Although the primary purpose of these testbeds is to provide resources to users who could not otherwise satisfy their experimental needs, an important side effect is that multiple users and user groups have access to the same resources, which are compatible with the same experimental artifacts, such as appliances/images or orchestration templates. This creates conditions that allow users to share experiments and replicate each other's work more easily, and it creates an opportunity to foster good experimental practices as well as a sharing ecosystem.
Reproducibilification, i.e., making experiments reproducible, is the ultimate goal for successful scientific experiments. In this work, we identify key challenges for the design of reproducible network experiments. We present our approach to reproducible network research, which enforces an experiment workflow leading to inherently replicable network experiments. Our approach, realized in our testbed infrastructure, combines high-precision measurement tools, full automation, and support for publishing experiment scripts and results. We further present ongoing work, including extending high-precision traffic generation and measurement capabilities to 100G Ethernet. Future plans involve the creation of a multi-site wireless testbed, which connects our testbed infrastructure with different remote testbeds, thereby creating a federated testbed. This federated testbed can be used for scenarios combining 5G Radio Access Network infrastructure with high-performance backbone infrastructure to investigate low-latency communication and edge computing use cases.