Proposal Cover Page

Title of Proposed Project

Active Internet Measurements for ESnet (AIME)

DoE Office of Science Notice 01-01:

Continuing Solicitation for all Office of Science Programs,  Published December 7, 2000

Name of laboratory:             Stanford Linear Accelerator Center (SLAC)

 

Principal investigator:         Les Cottrell

Position title of PI:               Assistant Director of SLAC Computing Services

Mailing Address:                 MS 97, SLAC, POB 4349, Stanford, California 94309.

Phone:                                    (650)926-2523

FAX:                                     (650)926-3329

Email:                                      cottrell@slac.stanford.edu

               

Name of Official signing for Laboratory:          Jonathan Dorfan

Title of official:                                                     Laboratory Director

Phone:                                                                    (650)926-8701

FAX:                                                                      (650)926-8705

Email:                                                                      jonathan.dorfan@slac.stanford.edu

 

Requested funding:             Year 1          Year 2          Year 3          Total

SLAC:                          $225K           $237K           $251K           $713K
PSC:                           $85K            $83K            $76K            $244K
ANL:                           $45K            $45K            $45K            $135K
Total:                         $355K           $365K           $372K           $1,092K

Use of human subjects in proposed project:   No

Use of vertebrate animals in proposed project: No

 

Signature of PI:                                                                                     Date:

 

Signature of Official:                                                                            Date:

 


Table of Contents

1     List of Participants
1.1      SLAC
1.2      PSC
1.3      ANL
1.4      University of Tennessee at Knoxville
1.5      LBNL/ACIRI
2     Abstract
3     Background and Significance
3.1      Importance & Relevance
3.2      Gaps to be filled by current proposal
3.2.1       PingER
3.2.2       NIMI
3.2.3       Beacon
3.2.4       NWS
3.2.5       IPv6
3.2.6       QoS
3.2.7       NTON
3.2.8       Ties with other SciDAC proposals
3.2.9       Relevance of current project
4     Preliminary Studies
4.1      Other related measurement projects
4.1.1       PingER
4.1.2       NIMI
4.1.3       Beacon
4.2      Experience and Competence
4.2.1       SLAC
4.2.2       PSC
4.2.3       ANL
5     Research Design and Methods
5.1      Extension of NIMI Monitoring Capabilities
5.2      Automated analysis and reporting
5.3      NIMI Resource Control
5.4      Forecasting
5.5      Deployment
5.6      Tentative timetable
5.6.1       First 6 Months
5.6.2       Next 6 Months
5.6.3       2nd year
5.6.4       3rd Year
5.7      Subcontract or Consortium Arrangements
6     Literature Cited
7     Budget
8     Other Support of Investigators
8.1      SLAC
8.2      PSC
8.3      ANL
9     Biographical Sketches
9.1      Roger Leslie Anderton Cottrell
9.1.1       EMPLOYMENT SUMMARY
9.1.2       EDUCATION SUMMARY
9.1.3       Narrative
9.1.4       Publications
9.2      Warren Matthews
9.2.1       EMPLOYMENT SUMMARY
9.2.2       NARRATIVE
9.2.3       RELEVANT PUBLICATIONS
9.3      Andrew Adams
9.3.1       Education
9.3.2       Professional Experience
9.3.3       Professional Societies
9.3.4       Recent Publications
9.4      Gwendolyn L. Huntoon
9.5      Bill Nickless
9.6      Linda Winkler
9.6.1       Professional Experience
9.6.2       Education
9.6.3       Recent publications
9.7      Vern Paxson
9.7.1       Selected publications
10       Description of Facilities and Resources
10.1     SLAC Facilities & Resources
10.2     PSC Facilities & Resources
10.2.1      Computing Resources
10.2.2      Office
10.3     ANL Facilities & Resources
11       Appendix
11.1     Expression of interest from kc claffy for CAIDA
11.2     Expression of interest from Vern Paxson
11.3     Expression of Interest from LBNL
11.4     Expression of Interest from Rich Wolski


1          List of Participants

1.1         SLAC

Les Cottrell (PI), Warren Matthews, Doug Chang

1.2         PSC

Andrew Adams, Gwendolyn Huntoon

1.3         ANL

Bill Nickless, Linda Winkler

1.4         University of Tennessee at Knoxville

Rich Wolski

1.5         LBNL/ACIRI

Vern Paxson

2          Abstract

We propose to extend research activities into Internet performance by expanding the reach and functionality of the National Internet Measurement Infrastructure (NIMI) probes. The project will add new, and extend existing, measurement capabilities for NIMI to improve its ability to capture multicast traffic, measure end-to-end and hop-by-hop bandwidths, and provide more detail on inter-packet delay variation and on out-of-order and duplicate packets. We will leverage and extend the tools developed for the PingER and Beacon projects to enhance NIMI's analysis and reporting capabilities, in particular providing the ability for the user to select measurement type, time window, and metric affiliation groups, and to drill down to more detailed tabular and graphical information. We will provide selected data to the Network Weather Service to assist with predicting future performance. Finally, we also plan to extend the measurements to include IPv6 network parameters, and to evaluate the impacts of Quality of Service (QoS).

Using first the existing and then the new capabilities of NIMI, we will measure existing paths between ESnet sites, as well as new and existing paths between ESnet sites and ESnet collaborator sites with high performance network links. This will include extremely high performance links such as those provided by the National Transparent Optical Network (NTON) and transatlantic/transpacific OC-3 to OC-12 links.

This active monitoring system will integrate with passive monitoring efforts and provide an essential component in a complete end-to-end network test and monitoring capability. It will also provide a platform for adding new active measurements from other proposed SciDAC projects.

3          Background and Significance

3.1         Importance & Relevance

The extraordinary network challenges presented by scientists and researchers, and in particular by high energy nuclear and particle physics (HENP) experiments, have created a critical need for network monitoring to understand present performance, set expectations, troubleshoot, and allocate resources to optimize/improve performance. An infrastructure to provide advanced IP based data transport services is largely in place in the N. American, W. European and Japanese research and education communities. However, there currently does not exist a well defined, always on, systematic, ubiquitous and automated approach to characterizing the quality of service parameters of all the components involved in data transport services from a source to a destination. See, for example, the recent special editorial on network traffic measurements and experiments [Chen00] for a fairly comprehensive account. The current measurement infrastructure is not able to keep up with the increasing size of, demand for, and reliance on the Internet. This problem only gets worse as more demanding applications are developed and deployed.

The growth of high performance, data intensive science applications has dramatically increased the need for reliable and regular network measurement. These applications, including those of HENP experiments such as BaBar [babar] and Atlas [atlas], and the Particle Physics Data Grid (PPDG) [ppdg], depend on the network measurement infrastructure to: troubleshoot the network; set realistic expectations; plan for experiments; and understand overall performance such as throughput. These requirements include high performance bulk throughput for Terabytes of physics data, data replication, remote backup and archiving, and content distribution such as video streaming. In addition, new services such as QoS, and new applications such as interactive voice over IP (VoIP), experiment control, multimedia and multicast applications, are increasing the need for new types of measurements, including the effects of applying QoS and metrics such as jitter, continuous availability/reachability and multicast. For example, merely detecting reachability loss for multicast services is inherently a 2-dimensional, full-mesh problem.

3.2         Gaps to be filled by current proposal

The current proposal will bring together and leverage four existing and successful measurement projects, specifically PingER, NIMI, Beacon and NWS to create a measurement infrastructure which will support the growing demands of high performance data intensive science applications.  The proposal also extends NIMI to make measurements of the new IPv6 and QoS enabled networks, and will help validate the scaling of the measurement techniques by extending the measurements to extreme high performance networks such as NTON. The proposal also has close ties to several other separately funded SciDAC proposals that will provide mutual benefit and synergy.

3.2.1           PingER

We will extend the highly successful PingER project, in particular by providing more detailed (more frequent as well as more metrics) information between critical sites with high performance network connections, and by an increased focus on high performance network ESnet sites and sites with strong ESnet site collaborative requirements. It will also provide an alternative measurement technique for paths where ICMP may be restricted [rate] and/or a way to validate whether ICMP rate limiting is being used on a path. At the same time it will leverage the analysis and reporting facilities of PingER.

3.2.2           NIMI

The proposal also builds on the NIMI architecture that is now successfully deployed at about 51 sites (including three of the proposers' sites: SLAC, PSC and LBNL) in 10 countries. We plan to extend two critical components of the NIMI architecture: first, to add additional resource control mechanisms, and second, to expand the reporting and analysis capabilities. For the reporting and analysis component, we will leverage the tools in use in PingER to provide public web access to tabular and graphical historical data, with selection of metrics, time scales and path groupings, and drill down to more detailed data and graphs.

Results will be available publicly via the web. Example uses include:

·         Enable network and applications users to have realistic performance expectations for existing and new applications;

·         Provide trouble shooting assistance by identifying when changes occurred and what the changes were and in some cases what the impact of the changes may have been;

·         Assist in verification that expected levels of service are being met;

·         Provide input to setting and verifying service level agreements (SLA’s);

·         Help decide which path to use when more than one is available, and;

·         Assist in deciding where to locate a remote computing/replication facility.

Some success stories of using PingER for the above purposes can be found in [examples].

 

3.2.3           Beacon

The IP Multicast Beacon [beacon] is a real-time monitoring system for IP Multicast reachability, loss, jitter, and time synchronization.  While a real-time monitoring system is extremely useful for debugging reachability problems, a system is also needed to archive the IP Multicast state over time, in order to observe how the service evolves over the long term.  Some failures of the network are immediately evident from real-time measurements, but other failures only become apparent through longer term monitoring of IP Multicast traffic.

The IP Multicast Beacon [beacon] software has provided visibility into the state of deployment of IP Multicast across several national backbones (Abilene, ESnet, vBNS) and several dozen local sites (National Laboratories, universities, and other organizations). Adding Beacon functionality to NIMI probes will reduce the number of network probes necessary at a given site, while increasing the number of instrumented IP Multicast endpoints.

We propose to add IP Multicast Beacon functionality to the AIME framework.  Each IP Multicast-enabled NIMI node will both transmit and receive IP multicast test traffic, as the current IP Multicast Beacon implementation [beacon] does today.  The reachability information gained by these measurements will be archived using the same mechanisms as other NIMI measurements, and made available through the same data dissemination mechanisms (primarily web pages).
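
To make the full-mesh nature of the archived data concrete, the following minimal sketch (illustrative only; the report format, field names and node names are assumptions, not the actual Beacon or NIMI data layout) shows how archived beacon-style reports could be folded into the sender-by-receiver loss matrix that the reporting pages would display:

# Illustrative sketch: build the full-mesh sender x receiver loss matrix from
# archived beacon-style reports. The report tuple layout is a made-up example.

def loss_matrix(reports, nodes):
    """reports: (receiver, sender, packets_received, packets_expected) tuples."""
    matrix = {s: {r: None for r in nodes} for s in nodes}
    for receiver, sender, received, expected in reports:
        if expected > 0:
            matrix[sender][receiver] = 1.0 - received / expected
    return matrix

nodes = ["slac", "anl", "psc"]
reports = [
    ("anl", "slac", 98, 100),   # anl heard 98 of 100 packets sent by slac
    ("psc", "slac", 0, 100),    # psc heard nothing from slac: a reachability hole
    ("slac", "anl", 100, 100),
    ("psc", "anl", 99, 100),
    ("slac", "psc", 97, 100),
    ("anl", "psc", 100, 100),
]
m = loss_matrix(reports, nodes)
for sender in nodes:
    print(sender, m[sender])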

3.2.4           NWS

In addition the integration with the Network Weather Service will provide a base for ESnet application developers to instrument and improve their applications to take advantage of dynamic forecasts of performance characteristics. A key unfunded member of the current proposal’s team (Rich Wolski) is also the chief architect of the Network Weather Service.

 The close contact of key people in this proposal with PPDG application developers will assist in extending mature applications such as parallel FTP to take advantage of the Network Weather Service (NWS) [nws]. In addition, we anticipate that the integration of NWS and NIMI capabilities will generate new network performance analysis and forecasting research results.  This proposal will enable PPDG developers to leverage those results immediately.

3.2.5           IPv6

Another valuable contribution will be IPv6 monitoring. As IPv6 is deployed, it will be useful to monitor its performance and peering arrangements. We propose to investigate porting NIMI to IPv6 and to deploy a small number of IPv6-aware NIMI boxes.  One of the key collaborators, Warren Matthews, is also a leader in monitoring IPv6 paths [ipv6meas], and SLAC is connected to the ESnet IPv6 testbed.

3.2.6           QoS

We will enable the NIMI measurement tools to mark packets and make measurements over paths that support QoS. Two SLAC connections will assist in moving this forward: SLAC is connected to the ESnet QoS testbed, and SLAC also has a joint proposal [daresbury] (SLAC's end is not funded) with Daresbury Lab in England to investigate the effectiveness of QoS techniques, which will provide access to a transatlantic QoS-controlled bottleneck.

3.2.7           NTON

Instrumenting the NTON with a few NIMI probes will enable us to understand whether and how the probes, the measurements and the analysis can be scaled to an extremely high performance (OC48) network, and will also assist in providing a better understanding of the end-to-end performance of the NTON. SLAC is an NTON site with an OC48 connection and has demonstrated > 900 Mbits/sec throughput from Dallas to SLAC [sc2k] in November 2000. Also, several major SLAC/BaBar collaborators are connected to NTON, so there is significant interest in ensuring it works well for their major applications.

3.2.8           Ties with other SciDAC proposals

We will work closely with the LBNL passive measurement proposal team to understand how the active and passive sets of information complement one another and also mutually validate the measurements. In particular, we see considerable advantages in being able to request passive measurements while making some special active measurements. The passive monitoring capability provides a means of observing the traffic as it flows through the network, and the active monitoring capability provides the means of generating and controlling that traffic and measuring the end-to-end result obtained. For more on comparing passive and active measurements and the complementarity of the two mechanisms, see Passive vs. Active Monitoring [passive].

We will also work closely with other network researchers to follow their developments and to assist with evaluation and integration of their tools into NIMI. Four examples of this, hopefully to be funded separately, are:

·         To add tools and analysis from CAIDA-led Accurate Estimation of End-to-End and Hop-by-Hop Bandwidth Characteristics for measuring end-to-end and hop-by-hop bandwidth characteristics in the face of cross traffic [caida];

·         To integrate the Rice University-led INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks [incite] multi-fractal measurement tools into NIMI and make the data available to the researchers;

·         To utilize the dynamic statistical experiment design tools proposed by ORNL, LANL and SLAC in Statistical Analysis and Design Methods for End-to-End Guarantees in Computer Networks to optimize the frequency and pairs being used in the NIMI measurements.

·         To compare and contrast NIMI throughput measurements with simulation predictions such as those proposed in the SLAC-led Optimizing Performance of Throughput by Simulation (OPTS) project.

3.2.9           Relevance of current project

At an upper level, with the infrastructure in use, we expect to be able to assist in the following goals:

·         Understand/predict whether and how an application should work from a network standpoint;

·         Enable ESnet and ESnet collaborator network engineers to make promises that are related to user expectations (today's promises are mainly best effort);

·         If an application should work and it does not, determine how to make it work;

·         If it shouldn't work, determine what has to be done;

·         Provide an understanding of how to build the network infrastructure needed to support a given application;

·         Make it increasingly likely that an application that should work, does work;

·         Reduce the time and effort to fix things;

·         Make sure there are measurements available in the right places;

·         Provide reports that produce information that can be passed on to another discipline (e.g. from user to site network engineer to ISP NOC), both for users and network engineers.

4          Preliminary Studies

4.1         Other related measurement projects

There are several existing Internet active end-to-end measurement projects in place today, including AMP [amp], NIMI [nimi], PingER [MC00], RIPE [ripe], skitter [skitter], Surveyor [surveyor], Beacon [beacon] and the NWS. A comparison of some of the projects [cf] indicates that most restrict themselves to measuring delays (or round trip times (RTT)), losses and routes. NIMI and the NWS, on the other hand, are mainly envisioned as infrastructures that enable monitoring of both delays and throughput.  In addition, the NWS is capable of generating real-time forecasts of future performance levels.

4.1.1           PingER

The existing DoE/MICS sponsored PingER project provides historical RTT, loss, and reachability information for over 3000 pairs of hosts in over 70 countries, with data going back more than 5 years. These countries, in the year 2000, had over 78% of the world's population and about 99% of the online users of the Internet. The PingER project automatically gathers and archives the raw data, reduces it into forms suitable for analysis, analyzes it, and produces tabular and graphical reports and web pages. The reports are accessible worldwide via WWW forms that allow users to dynamically customize what information they wish to see, and more statically (for reports that cannot be generated sufficiently quickly to be interactive) by browsing the project's WWW pages. The current proposal should be regarded as complementary to the PingER project. Whereas PingER focuses on light-weight (low network impact) measurements covering a wide range of countries and networks, this proposal focuses on ESnet [esnet] and ESnet collaborator sites with high performance connectivity (i.e. mainly research and education sites in N. America, W. Europe and Japan with link speeds today of at least 10 Mbps). For such sites, more intensive monitoring is both possible (due to the high performance links available) and needed (to characterize today's high performance applications) than is provided by the low impact PingER monitoring. By more intensive we mean measurements that generate more traffic, such as making measurements more frequently or performing high throughput measurements.

4.1.2           NIMI

NIMI (National Internet Measurement Infrastructure) is a software system for building network measurement infrastructures.  The project is a joint collaboration with Berkeley Lab, and is currently funded by the Defense Advanced Research Projects Agency (DARPA), award #A0G205. A NIMI infrastructure has two main components: a set of dedicated measurement servers running on hosts in a network, and measurement configuration and control software running on separate hosts.  Key NIMI design goals are: scaling to potentially thousands of NIMI servers; working across administrative and trust boundaries; accommodating and enforcing diverse policies governing measurement access; and supporting easy delegation of partial measurement access, to encourage different domains to participate in public uses of the infrastructure. NIMI servers queue requests for measurement for some future point in time, execute the measurement when its scheduled time arrives, store the results for retrieval by remote measurement clients, and delete the results when told to do so. NIMI does NOT presume a particular set of measurement tools.  Instead, NIMI servers have the notion of a measurement module, which can wrap a number of different measurement tools. Security is a central architectural concern: all access is via public key credentials.  The owner of a server can determine who has what type of access by controlling to whom they give particular credentials.
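
The following minimal sketch illustrates the request/schedule/retrieve workflow described above. It is not the real NIMI protocol or API; the module name, field names and the run_module() helper are assumptions made only for illustration.

# Illustrative sketch of a NIMI-like server workflow (NOT the actual NIMI code):
# a client queues a request naming a measurement module, the server runs it when
# the scheduled time arrives, stores the result, and later deletes it on request.

import time, heapq

class NimiLikeServer:
    def __init__(self):
        self.queue, self.pending, self.results = [], {}, {}

    def request(self, req_id, run_at, module, args, credential):
        # A real server would first check the credential against its access policies.
        self.pending[req_id] = (module, args)
        heapq.heappush(self.queue, (run_at, req_id))

    def poll(self, now):
        while self.queue and self.queue[0][0] <= now:
            _, req_id = heapq.heappop(self.queue)
            module, args = self.pending.pop(req_id)
            self.results[req_id] = run_module(module, args)

    def retrieve(self, req_id):
        return self.results.get(req_id)

    def delete(self, req_id):
        self.results.pop(req_id, None)

def run_module(module, args):
    # Placeholder for invoking a measurement module (ping, traceroute, throughput, ...).
    return {"module": module, "args": args, "status": "ok"}

server = NimiLikeServer()
server.request("r1", run_at=time.time(), module="traceroute",
               args={"dest": "some.esnet.site"}, credential="pubkey-credential")
server.poll(time.time() + 1)
print(server.retrieve("r1"))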

The existing NIMI project has successfully developed and deployed a pilot infrastructure to about 51 sites, mainly universities in the US, plus some universities and institutions in W. Europe and Korea. Today, very few (2) of these sites are directly on ESnet. NIMI is based on a collection of measurement probes that cooperatively measure the properties of Internet paths and clouds by exchanging test traffic amongst themselves. It provides: decentralized control of measurements; strong authentication and security; mechanisms for maintaining tight administrative control over who can perform what measurements using which probes; delegation of some forms of measurements; and simple configuration and maintenance of probes. The NIMI probes can make ping, traceroute, throughput and multicast measurements, and the platform is designed to make it simple to include other active measurement tools. Though the project has produced many important studies, there are currently no regularly updated NIMI measurement results available on the Web for public access.

4.1.3           Beacon

RTPMON [rtpmon] was used to show IP Multicast reachability and loss rates during early deployment of Access Grid nodes [accessgrid].  Unfortunately, RTPMON assumes that the IP Multicast service of an internetwork is working correctly.  The first attempt to overcome this limitation was to modify the IP multicast user applications to send their RTCP messages by IP Unicast to a central RTPMON receiver.  This technique quickly indicated nodes not served by IP Multicast.

A suitably modified version of the vic [vic] software was quickly packaged up and sent to Access Grid sites that were experiencing IP Multicast deployment problems.  Even in this rough state, the operational data provided in real time to network engineers was invaluable: for the first time, the network engineers could directly see the effect of changes across the whole set of participating nodes in an IP Multicast group.

Operationally, however, this initial capability was less than ideal for several reasons:

·         The modified vic required that the user of the host computer remain logged in.

·         The modified version of vic had very specific host requirements in terms of operating system and support programs.

·         The modified RTPMON was useful only on the machine where the modified vic clients were programmed to send their reports.

NLANR [nlanr] agreed to implement a Java [java] based software package--known as the IP Multicast Beacon--to replace the modified vic and RTPMON setup.  The “write once, run anywhere” features of Java eliminated the host dependencies that plagued the earlier RTPMON/vic approach.  Each instrumented node requires only a Java runtime, which often comes bundled with the operating system.  The central collection point exports its data in real time as a web-browser accessible page, no longer requiring special software for network engineers and managers to access the data.

The existing NLANR Beacon software provides a real-time representation of IP Multicast reachability, loss, delay, and jitter.  However, it does not provide any history mechanism for viewing how these variables change over time.

4.2         Experience and Competence

4.2.1           SLAC

SLAC is the home site for the BaBar experiment/collaboration and is associated with several other projects requiring high network performance such as the SSRL/SDSC collaboration and the GLAST collaboration. ANL is a Globus core site and is also a major collaboration site for the LHC/Atlas project. Both ANL and SLAC are PPDG collaborators with one of the PIs for the PPDG being at SLAC, and the SLAC PI for the current proposal is currently a member of the PPDG. The close association with HENP user and developer communities requiring high network performance and with the PPDG and other Grid activities will provide a fertile ground for understanding needs and requirements, trying out ideas, and providing feedback.

SLAC is the home site for the highly successful IEPM/PingER active Internet End-to-end Performance Monitoring project, arguably the most extensive project of its kind in the world today. The IEPM/PingER project's leader and PI are, respectively, a key member of and the PI of the current proposal.

4.2.2           PSC

The Pittsburgh Supercomputing Center (PSC) is the lead institution on the NSF-sponsored Web100 project. The goal of Web100 is to drive the wide deployment of advanced TCP implementations that can support high performance applications without intervention from network experts. This is accomplished through extensive internal instrumentation of the stack itself plus TCP autotuning. This will permit simple tools to "autotune" TCP as well as diagnose other components of the overall system, such as inefficient applications and malfunctioning Internet paths. Web100 will bring better observability to these last two classes of problems, leading to long term overall improvements in both applications and the underlying infrastructure.

 

PSC has participated in the Engineering Services (ES) group of the NSF-sponsored NLANR project for over six years. Through NLANR ES, PSC provides ongoing engineering and technical support to NSF-funded High-Performance Connection (HPC) sites as well as the broader high-performance networking community. NLANR ES staff provide expert consulting for HPC campus network engineers, develop documentation on emerging HPN technologies, provide tools for diagnosing network performance, and provide a forum for disseminating information to the high-performance network community through the NLANR/I2 Joint Techs Workshops. The NLANR ES performance tuning WWW page http://www.psc.edu/networking/perf_tune.html is an authoritative source for information on how to hand-tune a wide variety of operating systems for use over the national high-performance network backbones.
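
For illustration, the heart of such hand-tuning is sizing the TCP socket buffers to at least the path's bandwidth-delay product; the short sketch below uses arbitrary example numbers (an OC-3 rate and a 90 ms round trip time), not measured values:

# Minimal sketch of the bandwidth-delay product calculation that underlies TCP
# buffer hand-tuning (and Web100-style autotuning). Numbers are arbitrary examples.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Socket buffer (bytes) needed to keep a path of this bandwidth and RTT full."""
    return int(bandwidth_bps * rtt_seconds / 8)

print(bdp_bytes(155e6, 0.090))   # ~1.7 MB, far larger than typical default buffers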

 

PSC has made a number of contributions to TCP technology.  They were the first authors on RFC2018, "TCP Selective Acknowledgement Options", which enables the TCP sender to know exactly which data is missing at the receiver. Their ongoing research on TCP congestion control has resulted in the development of several algorithms that can improve TCP performance in high-performance environments [MM96, MSJ99]. They also developed the base model for TCP bulk performance [MSMO97], have made substantial contributions to the IETF IP Performance Metrics working group [Ma96, RFC2330, MA99, and Mat99] and did groundbreaking work in TCP Autotuning [SMM98].
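
For reference, the macroscopic model of [MSMO97] bounds the bulk-transfer bandwidth BW achievable by TCP congestion avoidance in terms of the maximum segment size MSS, the round trip time RTT and the packet loss probability p (with the constant C approximately sqrt(3/2) under periodic-loss assumptions); in LaTeX notation:

BW \le \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}, \qquad C \approx \sqrt{3/2}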

4.2.3           ANL

ANL has been driving deployment of IP Multicast in the networking community as a side effect of the Access Grid [accessgrid] deployment effort.  We have successfully deployed IP Multicast as part of the basic infrastructure at major networking conferences such as iGrid and SC, where Access Grid installations were demonstrated and used.  We have helped push IP Multicast into institutions ranging from National Laboratories to Native American tribal colleges. 

Our primary diagnostic tool for this IP Multicast deployment work has been the IP Multicast Beacon [beacon], and based on our experience we believe there is an acute need for an archive of Beacon state that can be queried over time.  Getting IP Multicast deployed on a "hero" basis is one thing, but ensuring it stays up and stable over time requires even more sophisticated tools than we have today.

5          Research Design and Methods

Our proposal has five main research goals:

a.        the enhancement of NIMI monitoring/measurement capabilities to include new, or extend existing measurements of vital network performance characteristics such as end-to-end and hop-by-hop bandwidth estimation, inter-packet delay variation,  duplicate and out of order packet delivery frequencies, cross-traffic estimation, and multicast performance;

b.       research into and development of automated analysis and reporting tools, with the generation of web accessible pages providing the user with interactive access to results for selected metrics, measurement methods, time-scales and affinity aggregation groups, and with drill down to graphical plots and more detailed results;

c.        expansion of NIMI resource control capabilities, including research into how to allocate system resources in order to support multiple measurement studies without adversely affecting measurement results;

d.       the provision of forecasting capabilities within the monitoring framework, and

e.        the deployment of NIMI probes to selected high performance sites of critical interest to the ESnet community.

The resulting tools and NIMI probes will serve both as an invaluable resource enabling the development of new HENP applications and as a network research tool of unmatched scope and capability.

5.1         Extension of NIMI Monitoring Capabilities

We will begin the project using the existing NIMI tools to measure: unicast delay, loss, and jitter; multicast routes; traceroutes; and TCP throughput and bulk transfer capacity. The NIMI probes will also contain a packet filter that will enable looking at the probe's traffic only (for privacy/security reasons). We will research and extend these tools to measure inter-packet delay variability and reordering. We will also investigate pathologies such as duplicate packets, conditional loss probabilities (i.e. the probability that if one packet is lost the next packet is also lost) and the distributions of the numbers of adjacent packets lost (the conditional loss probabilities and gap metrics defined in [bolot]). We plan to research and add new tools to NIMI for measuring bottleneck bandwidth and cross-traffic, and to examine dynamically optimizing the measurement frequencies and the sites being monitored using statistical experiment design techniques. We also plan to integrate the Beacon functionality into the NIMI framework. In addition to simple integration, we will integrate the data generated by the Beacon system into the NIMI long-term database. We will extend the deployment and measurements to include some IPv6 [ipv6] paths and paths that support QoS. We will also work with ESnet staff to investigate the use of NIMI to look at link utilization at site border routers by accessing the SNMP MIBs in read-only mode. On-demand measurements of bulk throughput capability will also be added to help understand and troubleshoot bulk throughput applications, and to validate simpler, more lightweight bulk-throughput estimators (e.g. simulation). Other measurements, such as HTTP, may be added depending on demand.
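
As an illustration of the loss pathologies mentioned above, the sketch below (the packet trace is a made-up example) computes the overall loss rate, the conditional loss probability, and the loss gap-length distribution from a per-packet lost/received sequence of the kind a ping-style probe stream produces:

# Illustrative sketch: [bolot]-style loss metrics computed from a boolean sequence
# where True means the probe packet was lost. The example trace is made up.

from collections import Counter

def loss_metrics(lost):
    n = len(lost)
    # Conditional loss probability: P(packet i+1 lost | packet i lost)
    pairs = sum(1 for i in range(n - 1) if lost[i])
    both = sum(1 for i in range(n - 1) if lost[i] and lost[i + 1])
    cond = both / pairs if pairs else 0.0
    # Gap metric: distribution of run lengths of consecutively lost packets
    gaps, run = Counter(), 0
    for x in lost:
        if x:
            run += 1
        elif run:
            gaps[run] += 1
            run = 0
    if run:
        gaps[run] += 1
    return sum(lost) / n, cond, dict(gaps)

trace = [False, True, True, False, False, True, False, False, False, True, True, True]
print(loss_metrics(trace))   # loss rate, conditional loss probability, {gap length: count}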

5.2         Automated analysis and reporting

Currently, measurement results produced via the NIMI measurement infrastructure are packaged and shipped to a predetermined repository referred to as the Data Analysis Client (DAC). Post-processing of the results, when performed, is done outside of the NIMI architecture. In order to generate real-time feedback for the user, we propose to implement automated post-processing on the DAC. This work will include first research on, and then development of, analysis profiles to control the post-processing of an individual measurement or subsets of a measurement study. Moreover, we will research, test and develop estimators for the data, summarize and visualize the estimators, and produce web accessible reports that will include tabular time series similar to those provided by PingER [report]. The reports will allow user selection of metric (e.g. delay, loss, jitter, throughput, reachability), time scales (both the time separation of adjacent points, and the window in time being reported on), and paths (both the source and destination, and grouping by affinities such as collaboration, geographical region, or Internet Service Provider (ISP)). The user will also be able to sort the data by simply clicking on column headings. In addition, the reports will provide user drill down to display time series plots and frequency histogram details for individual paths and groups of paths.  The tabular data will also be exportable to applications such as Excel to enable customized analysis and reports to be generated by interested users. In addition, traceroute history information will be summarized and made available. We will further study the results to see how the estimators scale to ultra high speed networks, understand how to visualize the more frequent measurements (compared to PingER), research, develop and test computationally efficient new estimators, and compare and contrast the NIMI results with simpler, more lightweight mechanisms. The raw traceroute data will also be made available to assist with research efforts elsewhere, such as the network tomography research proposed as part of the INCITE: Edge-based Processing and Service Inference for High-Performance Networks project proposal.
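
As a sketch of the kind of automated post-processing the DAC could perform (the record format, affinity groups and choice of statistics below are illustrative assumptions, not the planned implementation), raw measurements might be grouped by affinity and time bucket and reduced to the summary estimators shown in the report tables:

# Illustrative sketch: reduce raw measurement records to per-group, per-time-bucket
# summaries of the kind a PingER-style report table presents. Fields are made up.

from statistics import median, quantiles
from collections import defaultdict

def summarize(records, group_of, bucket_seconds=3600):
    """records: iterable of (timestamp, src_host, dst_host, metric_value)."""
    buckets = defaultdict(list)
    for ts, src, dst, value in records:
        key = (group_of(src), group_of(dst), int(ts // bucket_seconds))
        buckets[key].append(value)
    table = {}
    for key, values in buckets.items():
        p90 = quantiles(values, n=10)[8] if len(values) >= 2 else values[0]
        table[key] = {"n": len(values), "median": median(values), "p90": p90}
    return table

group_of = lambda host: {"slac": "ESnet", "anl": "ESnet", "in2p3": "Europe"}.get(host, "Other")
records = [(3600 * 5 + i, "slac", "in2p3", 120 + i % 7) for i in range(20)]
print(summarize(records, group_of))   # e.g. {('ESnet', 'Europe', 5): {...}}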

5.3         NIMI Resource Control

Although the current NIMI infrastructure can support multiple measurement studies, researchers who initiate the measurements must currently coordinate their efforts in order to avoid biases produced by concurrently running measurement tools.  Moreover, the NIMI probes themselves can become unstable if aggressive or poorly architected measurement tools consume system resources. To mitigate the need for researcher pre-communication, and to make the NIMI probes more robust, we intend to research, develop and implement several architectural improvements in resource control and enforcement (a minimal policy-check sketch follows the list below), including:

·         Fine-grained access control, to limit the resources (i.e., packets on the wire, disk space usage) on a per-NIMI probe, per-user basis.

·         Inter-domain monitoring via the NIMI Configuration Point of Contact (CPOC), where the CPOC periodically polls all NIMI probes within its administrative domain for system resource usage.  This will act as an early warning system for the human administrator.

·         Measurement tool resource profiles (e.g., ports, protocol, Berkeley Packet Filters (BPFs)), to allow the scheduler to check for competing resources and compensate accordingly.

·         Generic packet filter, being developed at LBNL, to act as an enforcement proxy between the measurement tools and the BPF devices.

·         Measurement expirations, including a fine-grained notion (i.e., how long to attempt to deliver a measurement, how long to keep a result around on local disk), as well as coarser-grained expirations (i.e., over an entire measurement study per NIMI probe).
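
The minimal sketch below illustrates the per-probe, per-user policy check implied by the first item; the policy fields (a packet-rate limit and a disk quota) and the request shape are assumptions for illustration, not the NIMI access-control design:

# Illustrative sketch of a per-user resource policy check on a NIMI-like probe.
# The policy and request fields are made-up examples.

from dataclasses import dataclass

@dataclass
class ResourcePolicy:
    max_pkts_per_sec: int
    max_disk_bytes: int

@dataclass
class MeasurementRequest:
    user: str
    est_pkts_per_sec: int
    est_disk_bytes: int

def admit(request, policies, disk_usage):
    """Accept a request only if it fits within the requesting user's policy."""
    policy = policies.get(request.user)
    if policy is None:
        return False, "no credential/policy for user"
    if request.est_pkts_per_sec > policy.max_pkts_per_sec:
        return False, "would exceed packet-rate limit"
    if disk_usage.get(request.user, 0) + request.est_disk_bytes > policy.max_disk_bytes:
        return False, "would exceed disk quota"
    return True, "admitted"

policies = {"alice": ResourcePolicy(max_pkts_per_sec=100, max_disk_bytes=50_000_000)}
print(admit(MeasurementRequest("alice", 10, 1_000_000), policies, {"alice": 0}))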

5.4         Forecasting

We plan to provide selected data to the NWS to assist with predicting future performance. While network monitoring alone is critical to network capacity-planning and diagnostic activities, if HENP applications are to use the resulting data for scheduling, a forecast of future performance levels is required.  The most sophisticated network performance analysis techniques available today indicate that the observed performance of critical network links can change rapidly.  If application scheduling decisions are based on recently-observed performance conditions the scheduler must assume that the conditions will persist until the application executes.  If conditions change between the time that the schedule is developed and the application executes, the schedule will be based on assumptions of resource availability that are no longer true.  That is, the schedule is developed for performance that was, but no longer is, available when the application eventually executes.  To support application scheduling, then, the performance system must be able to develop predictions of future performance levels.  As such, we plan to provide an interface between NIMI performance monitoring facilities and the NWS.  We will use this interface both to study the problem of real-time throughput forecasting from network-level performance measurements, and to support HENP application scheduling.
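
To make the forecasting step concrete, the following minimal sketch follows the general NWS approach of running several simple predictors over the measurement history and reporting the forecast of whichever predictor has had the smallest recent error; it is an illustration under that assumption, not the actual NWS implementation:

# Illustrative sketch of adaptive forecasting in the NWS spirit (not the NWS code):
# evaluate several simple predictors on the history and use the one with least error.

from statistics import mean, median

def ewma(history, a=0.3):
    est = history[0]
    for x in history[1:]:
        est = a * x + (1 - a) * est
    return est

PREDICTORS = {"last": lambda h: h[-1], "mean": mean, "median": median, "ewma": ewma}

def forecast(history):
    """history: past throughput (or RTT) measurements, oldest first."""
    errors = {name: 0.0 for name in PREDICTORS}
    for i in range(1, len(history)):
        for name, predict in PREDICTORS.items():
            errors[name] += abs(predict(history[:i]) - history[i])
    best = min(errors, key=errors.get)
    return best, PREDICTORS[best](history)

print(forecast([42.0, 40.5, 44.1, 39.8, 41.2, 43.0]))   # (best predictor, forecast)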

5.5         Deployment

We propose to extend the deployment of the existing NIMI infrastructure, originally funded by the Defense Advanced Research Projects Agency (DARPA, award #AOG205) and the National Laboratory for Applied Network Research (NLANR), to enable gathering of data from sites of critical interest to ESnet, HENP, and this proposal.  We will strategically deploy the NIMI probes to optimize their usefulness at measuring unicast and multicast traffic (i.e., outside of firewalls).  Initially, additional probes will be administered by the existing NIMI team. Administrative support consists of: installing and initially configuring the software, monitoring the system, upgrading and adding measurement tools, and delegating researcher access to all or parts of the system.  Eventually, as the architecture improves (mainly as outlined in section 5.3), the system should be robust enough to partition the ESnet NIMI infrastructure into one or more logical administration domains, where each domain would then be responsible for administering the NIMI probes contained within it. The creation of a separate ESnet domain is in keeping with NIMI's fundamental design goal to support administratively heterogeneous infrastructures.

We believe that there are several projects that would like to deploy network measurement boxes at critical points in the network; thus we propose to deploy boxes (hardware, operating system and network connectivity) that can be shared. Other measurement proposals would be able to make use of the probe hardware, operating system and network interface to perform their measurements.

We also plan to extend the current project to instrument the National Transparent Optical Network (NTON) [nton] testbed with our tools, to ensure the tools and estimators scale to the next generation of high speed networks and to assist with understanding the NTON performance.

The data from the NIMI probes will be automatically gathered by a central archive machine that will retrieve and store the measurements in a file system. The archive machine can also host some default data analysis client/applications, however in the interests of scalability and simplicity we plan to separate this task out to other hosts. Reports from the applications will be made available via the web so there will also be a web site host with powerful tools to enable a user to navigate to find the information of interest.

5.6           Tentative timetable

5.6.1           First 6 Months

5.6.1.1       SLAC

The first phase (roughly months 1 to 6) of the work for SLAC will be, with some assistance from Vern Paxson and PSC, to study and understand NIMI in detail. In particular this will include the existing measurement tools, how they work (scheduling and how they probe the network), and what they record and how (formats, file hierarchies, databases). We will also review the existing data extraction, analysis and reporting tools. To assist in this, we plan some face-to-face meetings between the implementers, to facilitate getting the project started and to initiate good relationships between the people involved. As we gain an understanding of the recorded data we will start to extend the analyses and reporting. Given our experience with ping, this may start with the ping measurements, adding conditional loss probability, loss gap-length distributions, reordering and duplicate-packet reporting. We will extend the PingER analysis tools and infrastructure to incorporate the existing NIMI ping measurements, inter-packet delay, and the above additions. At the same time we will be working with two or three other friendly sites (including ANL and the University of Tennessee at Knoxville) to agree upon and arrange for early deployment of NIMI probes at their sites.

5.6.1.2       PSC

PSC will start to do research on how to improve the remote manageability/control of the NIMI measurements.

5.6.1.3       ANL

The ANL team will start to look at how to utilize the existing NIMI multicast measurements and to evaluate how to add Beacon functionality. The University of Tennessee team will develop an initial interface between the NIMI performance gathering infrastructure and the NWS forecasting subsystem.

5.6.2           Next 6 Months

5.6.2.1       SLAC

During the next phase (roughly months 7 to 12), the SLAC team will extend the reporting of NIMI data to include the throughput and bandwidth measurements. During this phase we will investigate various existing tools (e.g. [traceping], [surveyor]) for analyzing and reporting on the NIMI traceroute information we have gathered, choose one, and use it to make our traceroute information available. SLAC will also work, in some cases with other proposals, to start to look at how to extend the NIMI measurements, for example by adding a hop-by-hop, bottleneck, or end-to-end bandwidth estimation capability such as pchar [pchar], pipechar [pipechar] or nettimer [nettimer], and/or working with CAIDA to add hop-by-hop and end-to-end bandwidth measurements [pathrate].

5.6.2.2       PSC

PSC will extend the NIMI probes to SciDAC high performance network measurement/analysis sites such as ORNL, Rice, SDSC, Caltech, UTK, FNAL, BNL, JLab and Wisconsin. They will also start to deploy and test the new remote management tools on the NIMIs.

5.6.2.3       ANL

The ANL team will start to integrate Beacon into NIMI and provide analysis and reporting. The University of Tennessee team will deploy an enhanced NWS capable of serving NIMI-gathered data via the NWS, and will begin the study of new forecasting capabilities.

5.6.3           2nd year

5.6.3.1       SLAC

In the second year, SLAC will add IPv6 measurements, analysis and reporting to the NIMI suites. We will also work with LBNL to understand how to tie together the passive and active measurements, e.g. how to co-schedule them, how to compare, contrast and co-validate the measurements. In addition we will study the extra information available from the joint measurements and learn what they are and can tell us, and study how to get the most value from them.

5.6.3.2       PSC

PSC will complete the deployment of NIMIs, started earlier, to the remaining SciDAC sites. We will include some IPv6 and non-US sites (e.g. IN2P3 in Lyon, France, a UK lab such as RAL or DL, KEK in Japan, and an INFN site in Italy) in this deployment, the latter to assist in transatlantic/transpacific measurements.

5.6.3.3       ANL

ANL will also extend the measurements/analysis/reporting suites to add multicast measurements. During this period we will evaluate if and how to integrate our measurements with those from Surveyor or other measurement projects.

5.6.4           3rd Year

In the third year, we will carefully validate and compare the measurements against other mechanisms, and add new measurement suites based on our experiences and the experiences of others. We will also investigate providing access to the measurements and analyses from troubleshooting tools, to make problem isolation much easier for non-expert users. We will evaluate how to provide a smooth transition to a production service as opposed to a project, put together documentation of procedures and resources needed, come up with a plan for the transition, and work with others (as identified) to help define (e.g. by assisting in proposal writing) how to provide an ongoing production service.

5.7          Subcontract or Consortium Arrangements

This project is a joint effort between SLAC, PSC and ANL, with unfunded assistance from Rich Wolski of the University of Tennessee at Knoxville and Vern Paxson of ACIRI/LBNL.

PSC is the main development and administration site for NIMI. PSC will procure, configure, deploy and manage the extra NIMI probes. This will ensure a common base (hardware and software) with the existing NIMIs, which will make future management simpler. Since PSC is the repository of expertise on NIMI, they will also provide assistance to other researchers and developers in modifying existing NIMI tools and adding new ones.  PSC will also research how to enhance the NIMI architecture in order to provide automated result analysis and more robust resource control (i.e. minimizing measurement bias and stabilizing the system).

Linda Winkler and Bill Nickless of ANL continue to push IP Multicast deployment as part of the Access Grid [accessgrid] project.  ANL will integrate basic IP Multicast Beacon [beacon] functionality into the NIMI software, and will go on to put Beacon data into the NIMI data collection so that it can be accessed over time.

Rich Wolski of the University of Tennessee at Knoxville is the main architect of the Network Weather Service (NWS). The synergy between the current proposal and the NWS will enable this proposal to provide measurements to the NWS, and the NWS to analyze them and use its existing tools to provide real-time forecasts back to the HENP and ESnet developers.

6          Literature Cited

[accessgrid]: http://www.mcs.anl.gov/accessgrid

[amp]: http://amp.nlanr.net/AMP/

[atlas]: http://www1.cern.ch/Atlas/Welcome.html

[babar]:http://www.slac.stanford.edu/BFROOT/

[beacon]: http://dast.nlanr.net/projects/beacon

[bolot]: J.-C. Bolot, Characterizing End-to-End Packet Delay and Loss in the Internet, Journal of High-Speed Networks, vol. 2, no. 3, pp. 305-323, Dec 1993.

[caida]: http://www.caida.org/~kc/Props01/doeBW.html

[cf]: http://www.slac.stanford.edu/comp/net/wan-mon/iepm-cf.html

[Chen00]: T. M. Chen. Network traffic measurement and experiments: Guest Editorial. IEEE Communications Magazine, page 120, May 2000.

[daresbury]: http://dlitd.dl.ac.uk/public/i2qos/

[esnet]: http://www.es.net/

[examples]: http://www-iepm.slac.stanford.edu/pinger/uses.html

[ipv6]: http://www.ipv6.org/

[ipv6meas]: http://www.slac.stanford.edu/grp/scs/net/talk/ipv6-oct00/IPv6-2000_files/v3_document.htm

[java]: http://www.java.sun.com

[Ma96] M. Mathis, Diagnosing Internet Congestion with a Transport Layer Performance Tool, Proceedings of INET'96, June 1996, Montreal, Quebec, Canada.

 [MA99] M. Mathis, M. Allman, Empirical Bulk Transfer Capacity, Internet-draft: draft-ietf-ippm-btc-framework-02.txt, Work in progress revised October 1999.

[Mat99] M. Mathis, TReno Bulk Transfer Capacity, Internet-draft: draft-ietf-ippm-treno-btc-03.txt, Work in progress revised February 1999.

[MC00]: W. Matthews & R. Cottrell, The PingER Project: Active Internet Performance Monitoring for the HENP Community, IEEE Communications, 38(5), May 2000.

 [mcast]: A. Adams et. al. The Use of End-to-end Multicast Measurements for Characterizing Internal Network Behavior, IEEE Communications, 38(5), May 2000.

[MM96] M. Mathis, J. Mahdavi, TCP Rate-Halving with Bounding Parameters, Technical Note, October 1996.

[MSJ99] M. Mathis, J. Semke, J. Mahdavi, The Rate-Halving Algorithm for TCP Control, Internet Draft: draft-mathis-tcp-ratehalving-00.txt, August 1999, Currently in "Last Call" for experimental standard RFC status.

[MSMO97] M. Mathis, J. Semke, J. Mahdavi, T. Ott, The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm, Computer Communication Review, volume 27, number 3, pp. 67-82, July 1997.

[nettimer]: http://mosquitonet.stanford.edu/~laik/projects/nettimer/

[nimi]: http://www.ncne.nlanr.net/nimi/

[nlanr]: http://www.nlanr.net/

[nton]: http://www.ntonc.org/

[nws]: http://nws.npaci.edu/NWS/

[pchar]: http://www.employees.org/~bmah/Software/pchar/

[passive]: http://www.slac.stanford.edu/comp/net/wan-mon/passive-vs-active.html

[pathrate]: http://www.caida.org/outreach/papers/consti.pdf

 [pipechar]: http://www-didc.lbl.gov/pipechar/

[ppdg]: http://www.cacr.caltech.edu/ppdg/

[rate]: http://www-iepm.slac.stanford.edu/monitoring/limit/limiting.html

[report]: http://www.slac.stanford.edu/cgi-wrap/pingtable.pl?tick=monthly&from=PPDG&to=PPDG

[RFC2018] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, TCP Selective Acknowledgement Options, Internet Request for Comments 2018 (rfc2018.txt) October 1996.

[ripe]: http://www.ripe.net/ripencc/mem-services/ttm/index.html

[rtpmon]: http://bmrc.berkeley.edu/~drbacher/projects/mm96-demo/

[sc2k]: http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2k.html

[skitter]: http://www.caida.org/tools/measurement/skitter/

[SMM98] J. Semke, J. Mahdavi, M. Mathis, Automatic TCP Buffer Tuning, ACM SIGCOMM '98 Computer Communication Review, volume 28, number 4, October 1998.

[surveyor]: http://www.advanced.org/csg-ippm/

[traceping]: http://av1.physics.ox.ac.uk/www/traceping_description.html

[vic]: http://www-mice.cs.ucl.ac.uk/multimedia/software/vic/

 

 


7          Budget

Insert the budget spreadsheets and justifications here.

 

 


 

8          Other Support of Investigators

8.1         SLAC

Les Cottrell expects support from this proposal - 10%. Other requested SciDAC support: Consortium for End-to-End Network Assurance Performance Research - 10%; Statistical Analysis and Design Methods for End-to-End Guarantees in Computer Networks – 10%; Edge-based Traffic processing and Service Inference from High-Performance Networks – 10%; Optimizing Performance for Throughput by Simulation – 10%.

Les Cottrell currently receives support from SLAC Computer Services - 80%, from the IEPM/PingER project – 10%, and from the PPDG project – 10%.  The latter will go away at the end of Fiscal Year 2001.

Warren Matthews expects support from this proposal – 10%. Other requested SciDAC support: Consortium for End-to-End Network Assurance Performance Research - 10%; Statistical Analysis and Design Methods for End-to-End Guarantees in Computer Networks – 10%; Edge-based Traffic processing and Service Inference from High-Performance Networks – 10%; Optimizing Performance for Throughput by Simulation – 10%.

Warren Matthews currently receives support from SLAC Computer Services – 10% and from IEPM/PingER project – 90%.

8.2         PSC

Andrew Adams expects support from this proposal - 50%.  Andrew Adams currently receives support (50%) for NIMI on a subcontract to LBL (DARPA funding), through September 2001.  He also receives support (30%) for NIMI as part of the NSF-funded NLANR Engineering Services project, and provides production networking support (30%) for the PSC-managed Pittsburgh GigaPoP.

Gwendolyn Huntoon is not receiving any support for the project, but will provide general managerial and administrative oversight for PSC's component of the project as part of her role as Assistant Director at PSC.


8.3         ANL

See attachment.


9          Biographical Sketches

 

9.1         Roger Leslie Anderton Cottrell

 

Stanford Linear Accelerator Center, Mail Stop 97, P.O. Box 4349, Stanford, California 94309

Telephone: (650) 926 2523
Fax: (650) 926 3329
E-Mail: cottrell@stanford.edu

 

9.1.1            EMPLOYMENT SUMMARY

 

 

 

 

Period     Employer                              Job Title                                      Activities

1982 on    Stanford Linear Accelerator Center    Assistant Director, SLAC Computing Services    Management of networking, telecommunications and computing
1980-82    Stanford Linear Accelerator Center    Manager, SLAC Computer Network                 Management of all SLAC's network computing activities
1979-80    IBM U.K. Labs., Hursley, England      Visiting Scientist                             Graphics and intelligent distributed workstations
1967-79    Stanford Linear Accelerator Center    Staff Physicist                                Inelastic e-p scattering experiments, physics and computing
1972-73    CERN                                  Visiting Scientist, Staff Physicist            Split Field Magnet experiment

9.1.2           EDUCATION SUMMARY

Period     Institution              Examinations

1962-67    Manchester University    Ph.D., Interactions of Deuterons with Carbon Isotopes
1959-62    Manchester University    B.Sc. Hons., Physics

9.1.3            Narrative

I joined SLAC as a research physicist in High Energy Physics, focusing on real-time data acquisition and analysis in the Nobel Prize-winning group that discovered the quark. In 1972/73, I spent a year's leave of absence as a visiting scientist at CERN in Geneva, Switzerland, and in 1979/80 I was at the IBM U.K. Laboratories at Hursley, England, where I obtained United States Patent 4,688,181 for a dynamic graphical cursor. I am currently the Assistant Director of the SLAC Computing Services group and lead the computer networking and telecommunications areas. I am also a member of the Energy Sciences Network Site Coordinating Committee (ESCC) and the chairman of the ESnet Network Monitoring Task Force. I was a leader of the effort that, in 1994, resulted in the first Internet connection to mainland China. I am also the leader/PI of the DoE sponsored Internet End-to-end Performance Monitoring (IEPM) effort, and of the ICFA network monitoring working group.

9.1.4           Publications

The full list of over 50 publications is readily available from online databases.  I include here only a limited number of recent publications relevant to computing.

INTERNATIONAL NETWORK CONNECTIVITY AND PERFORMANCE, THE CHALLENGE FROM HIGH-ENERGY PHYSICS. By Warren Matthews, Les Cottrell, Charles Granieri (SLAC). SLAC-PUB-8382, Mar 2000. 18pp.  Talk presented at the Internet2 Spring Meeting, Washington D.C., 27 Mar 2000.

INTERNET END-TO-END PERFORMANCE MONITORING FOR THE HIGH-ENERGY NUCLEAR AND PARTICLE PHYSICS COMMUNITY. By Warren Matthews, Les Cottrell (SLAC). SLAC-PUB-8385, Feb 2000. 10pp.  Presented at Passive and Active Measurement Workshop (PAM 2000), Hamilton, New Zealand, 3-4 Mar 2000.

1-800-CALL-H.E.P.: EXPERIENCES ON A VOICE OVER IP TEST BED. By W. Matthews, L. Cottrell (SLAC), R. Nitzan (Energy Sciences Network). SLAC-PUB-8384, Feb 2000. 5pp.  Presented at International Conference on Computing in High Energy Physics and Nuclear Physics (CHEP 2000), Padova, Italy, 7-11 Feb 2000.

PINGER: INTERNET PERFORMANCE MONITORING: HOW NO COLLISIONS MAKE BETTER PHYSICS. By W. Matthews, L. Cottrell (SLAC). SLAC-PUB-8383, Feb 2000. 5pp.  Presented at International Conference on Computing in High Energy Physics and Nuclear Physics (CHEP 2000), Padova, Italy, 7-11 Feb 2000.

DISCUSSANT REMARKS ON SESSION: STATISTICAL ASPECTS OF MEASURING THE INTERNET. By R. Les Cottrell. Published in Proceedings of the 30th Symposium on the Interface (ISBN 1-886658-05-6).

INTERNET MONITORING IN THE HEP COMMUNITY. By Warren Matthews, Les Cottrell (SLAC), David Martin (Fermilab). SLAC-PUB-7961, Oct 1998. 8pp. Presented at International Conference on Computing in High-Energy Physics.

WHAT IS THE INTERNET DOING FOR AND TO YOU? By R.L.A. Cottrell, C.A. Logg (SLAC), D.E. Martin (Fermilab). SLAC-PUB-7416, Jun 1997. 7pp.  Talk given at Computing in High-Energy Physics (CHEP 97), Berlin, Germany, 7-11 Apr 1997. 

WRITING WORLD WIDE WEB CGI SCRIPTS IN THE REXX LANGUAGE. By R.L.A. Cottrell (SLAC). SLAC-PUB-7122, Mar 1996. 18pp.  Talk presented at the SHARE Technical Conference, 3-8 Mar 1996, Anaheim, CA.

NETWORK RESOURCE AND APPLICATIONS MANAGEMENT AT SLAC. By C.A. Logg, R.L.A. Cottrell (SLAC). SLAC-PUB-7057, Feb 1996. 14pp.  Networld + Interop Engineers Conference, 3-4 Apr 1996, Las Vegas, NV. 

DISTRIBUTED COMPUTING ENVIRONMENT MONITORING AND USER EXPECTATIONS.  By R.L.A. Cottrell, C.A. Logg (SLAC). SLAC-PUB-95-7008, Nov 1995. 7pp.  Contributed to International Conference on Computing in High Energy Physics (CHEP95), Rio de Janeiro, Brazil, 18-22 Sep 1995.  Published in CHEP 95:537-543 (QCD201:T5:1995) 

BABAR TECHNICAL DESIGN REPORT. By BaBar Collaboration (D. Boutigny et al.). SLAC-R-0457, Mar 1995. 618pp.

NETWORK MANAGEMENT AND PERFORMANCE MONITORING AT SLAC. By C.A. Logg, R.L.A. Cottrell (SLAC). SLAC-PUB-95-6744, Mar 1995. 9pp. Presented at Networld + Interop Conference, Las Vegas, NV, 27-31 Mar 1995. 

ADVENTURES IN THE EVOLUTION OF A HIGH BANDWIDTH NETWORK FOR CENTRAL SERVERS. By Karl L. Swartz, Les Cottrell, Marty Dart (SLAC). SLAC-PUB-6567, Aug 1994. 8pp.  Presented at the USENIX Association's 8th Large Installation System Administration Conference (LISA VIII), San Diego, CA, 19-23 Sep 1994.

NETWORKING WITH CHINA. By R.L.A. Cottrell, Charles Granieri (SLAC), Lan Fan, Rong-Sheng Xu (Beijing, Inst. High Energy Phys.), Yukio Karita (KEK, Tsukuba). SLAC-PUB-6478, Apr 1994. 5pp.  Contributed to Computing in High Energy Physics (CHEP 94), San Francisco, CA, 21-27 Apr 1994.  In *San Francisco 1994, Computing in High Energy Physics '94* 192-195. Lawrence Berkeley Lab. - LBL-35822 (94,rec.Feb.95) 192-195. 

NETWORK MANAGEMENT, STATUS AND DIRECTIONS. By R.L.A. Cottrell, T.C. Streater (SLAC). SLAC-PUB-5913, Aug 1992. 4pp.  Presented at 10th International Conference on Computing in High Energy Physics (CHEP 92), Annecy, France, 21-25 Sept 1992.

ANALYSIS OF NETWORK STATISTICS. By R. Leslie Cottrell (SLAC). SLAC-PUB-4234, Feb 1987. 40pp.  Invited talk given at the Conference on Computing in High Energy Physics, Asilomar, Calif., Feb 2-6, 1987.  Published in Comput. Phys. Commun. 45:93-109, 1987; also in Asilomar Computing H.E. Phys. 1987:93 (QCD201:T5:1987).


9.2         Warren Matthews

 

Stanford Linear Accelerator Center, Mail Stop 97, 2575 Sand Hill Road, Menlo Park, California 94025

Telephone:     (650) 926 5373
Fax:           (650) 926 3329
E-Mail:        warrenm@slac.stanford.edu

 

 

9.2.1            EMPLOYMENT SUMMARY

Period      | Employer                             | Job Title                      | Activities
1997 on     | Stanford Linear Accelerator Center   | Principal Network Specialist   | Network performance monitoring, network administration
1995-1997   | Netcom Internet                      | Technical Support              | Trouble-shooting

9.2.2           EDUCATION SUMMARY

Period      | Institution             | Examinations
1992-1995   | Brunel University       | Ph.D., A Search for Lepton Flavour Violating Tau Decays at OPAL
1991-1992   | Sussex University       | M.Sc., Astronomy
1988-1991   | Lancaster University    | B.Sc., Physics

9.2.3           NARRATIVE

Warren joined SLAC as a network specialist in 1997. He is the lead developer of the PingER software used to analyse the network performance data gathered by the IEPM effort. He performs wide-ranging data analysis aimed at meeting the networking needs of the High Energy Physics community. He is active in a number of Internet Engineering Task Force (IETF) working groups related to understanding and improving network performance. He is also SLAC’s BGP expert and works to apply academic research results to the production network.

He is active with the ESnet Site Coordinating Committee and the ESnet IPv6 working group. He is a member of the ESnet VoIP testbed group and of the Internet2 End-to-End initiative, VoIP and IPv6 working groups. His work involves close ties with numerous networks and groups concerned with end-to-end performance. He liaises with both research groups such as NTON and commercially oriented groups such as the Cross-Industry Working Team (XIWT).

9.2.4           RELEVANT PUBLICATIONS

ACTIVE AND PASSIVE MONITORING ON A RESEARCH NETWORK. By Warren Matthews, Les Cottrell and Davide Salomoni (SLAC). SLAC-PUB-8776, February 2001. 5pp. To be Presented at Passive and Active Measurement Workshop (PAM 2001), Amsterdam, the Netherlands, April 22-23 2001.

IPV6 PERFORMANCE AND RELIABILITY. By Warren Matthews and Les Cottrell. SLAC-PUB-8642, October 2000. 6pp. Presented at the IPv6 2000 Conference, Washington D.C., October 19-20 2000.

THE PingER PROJECT: ACTIVE INTERNET PERFORMANCE MONITORING FOR THE HENP COMMUNITY, IEEE Communications Magazine on Network Traffic Measurements and Experiments, May 2000.

INTERNATIONAL NETWORK CONNECTIVITY AND PERFORMANCE, THE CHALLENGE FROM HIGH-ENERGY PHYSICS. By Warren Matthews, Les Cottrell, Charles Granieri (SLAC). SLAC-PUB-8382, Mar 2000. 18pp.  Presented at the Internet2 Spring Meeting, Washington D.C., March 27 2000.

INTERNET END-TO-END PERFORMANCE MONITORING FOR THE HIGH-ENERGY NUCLEAR AND PARTICLE PHYSICS COMMUNITY. By Warren Matthews, Les Cottrell (SLAC). SLAC-PUB-8385, Feb 2000. 10pp.  Presented at Passive and Active Measurement Workshop (PAM 2000), Hamilton, New Zealand, March 3-4 2000.

1-800-CALL-H.E.P.: EXPERIENCES ON A VOICE OVER IP TEST BED. By W. Matthews, L. Cottrell (SLAC), R. Nitzan (Energy Sciences Network). SLAC-PUB-8384, Feb 2000. 5pp.  Presented at International Conference on Computing in High Energy Physics and Nuclear Physics (CHEP 2000), Padova, Italy, February 7-11 2000.

PINGER: INTERNET PERFORMANCE MONITORING: HOW NO COLLISIONS MAKE BETTER PHYSICS. By W. Matthews, L. Cottrell (SLAC). SLAC-PUB-8383, Feb 2000. 5pp.  Presented at International Conference on Computing in High Energy Physics and Nuclear Physics (CHEP 2000), Padova, Italy, February 7-11 2000.

INTERNET MONITORING IN THE HEP COMMUNITY. By Warren Matthews, Les Cottrell (SLAC), David Martin (Fermilab). SLAC-PUB-7961, October 1998. 8pp. Presented at International Conference on Computing in High-Energy Physics.


9.3         Andrew Adams

 

Pittsburgh Supercomputing Center
4400 Fifth Avenue
Pittsburgh, PA 15213
akadams@psc.edu, (412)268-5142

Birth Place: Orange, NJ
Birth Date: January 5, 1963

9.3.1           Education:

M.S. in Information Science, University of Pittsburgh, 1991 (Magna Cum Laude)
B.S. in Information Science, University of Pittsburgh, 1989

9.3.2           Professional Experience:

1995 - present, Network Engineer, Networking, Pittsburgh Supercomputing Center
1993 - 1995, Application Programmer, Common Knowledge: Pittsburgh, PSC
1991 - 1993, Senior User Consultant, User Services, PSC

9.3.3           Professional Societies:

Internet Engineering Task Force
Internet Society
Usenix

9.3.4           Recent Publications:

A System for Flexible Network Performance Measurement, A. Adams, M. Mathis, INET 2000, Proceedings, July 2000.

The Use of End-to-end Multicast Measurements for Characterizing Internal Network Behavior, A. Adams, T. Bu, R. Caceres, N. Duffield, T. Friedman, J. Horowitz, F. Lo Presti, S.B. Moon, V. Paxson, D. Towsley, IEEE Communications, Vol.38, No.5, May 2000.

Experiences with NIMI, V. Paxson, A. Adams, M. Mathis, Passive and Active Measurement Workshop 2000, Proceedings, April 2000.

Creating a Scalable Architecture for Internet Measurement, Andrew Adams, Jamshid Mahdavi, Matthew Mathis, and Vern Paxson, INET '96.

An Architecture for Large-Scale Internet Measurement, Paxson, V., Mahdavi, J., Adams, A., and Mathis, M., IEEE Communications, Vol.36, No.8, pp 48-54, August 1998.


9.4         Gwendolyn L. Huntoon

Pittsburgh Supercomputing Center

4400 Fifth Avenue

Pittsburgh, PA 15213

huntoon@psc.edu

 

Education:

M.S. in Electrical Engineering, Northeastern University, 1985

B.A. in Mathematics, Bowdoin College , 1983 (Magna Cum Laude)

Professional Experience:

1999-present, Assistant Director, Pittsburgh Supercomputing Center (PSC)

1995-1999, Manager, Networking, PSC

1992-1995, Coordinator, Networking, PSC

1991-1992, Acting Manager, Networking and Operations, PSC

1989-1991, Network Engineer, Networking and Operations, PSC

1985-1989, Staff Engineer, CNR Incorporated, Needham, MA

1983-1985, Staff Assistant, EE Department, Northeastern University

Professional Societies:

Internet Engineering Task Force

IEEE

Recent Publications:

"The NSF Supercomputing Centers' joint response to the Request for Public Comment ... next generation of the NSFnet" (with R. Butler, B. Chinoy, M. Hallgren, G. Hastings, M. Mathis, P. Love, and P. Zawada), August 3, 1992.

"Deployment of a HiPPI-based Distributed Supercomputing Environment at the Pittsburgh Supercomputing Center," J. Mahdavi, G .L. Huntoon, M. Mathis, March 1992, Proc. of the Workshop on Heterogeneous Processing, pp. 93-96.

"Distributed High Speed Computing," Mahdavi, Huntoon, and Mathis, Proceedings of the Third Gigabit Testbed Workshop, Jan 13-15, 1992, pp 223-231.

"DHSC Performance Bottleneck: Current Progress", Mahdavi, Huntoon and Mathis, Proceedings of the Third Gigabit Testbed Workshop, Jan 13-15, 1992, pp 337-380.

"Connecting Heterogeneous Supercomputers with a High Speed Network,", Matt Mathis and G.L. Huntoon, February 1991, Proceedings of the Second Gigabit Testbed Workship," Volume 2, pp. 231-234.

"Distributed High Speed Computing (DHSC) Software Tools", Huntoon, G.L, Mathis M and J. Mahdavi, Proceedings of the Second Gigabit Testbed Workshop, Feb 13-15, Volume 2, pp 451-457.

"Solution Manual to Introduction to Digital Signal Processing by John G. Proakis and Dimitris G. Manolakis, Macmillan Publishing Company", A. El-Jaroudi and G. L. Huntoon, New York, 1988.

Synergistic Activities:

Web100 (Sept 2000 - Present).  An NSF-funded collaborative project with NCAR and NCSA to develop a software suite that will enable ordinary users to attain full network data rates without requiring help from networking experts.  The software suite, initially developed for Linux platforms, will automatically tune the end-host network stack for the underlying network environment. It will also provide network diagnostics and measurement information.

NLANR Engineering Services (1997 - Present).  As part of the NSF-funded National Laboratory of Applied Network Research (NLANR), provides in-depth information and technical support to campus network engineers, gigapop operators and other high performance networking professionals for connecting to and effectively using high performance wide area networks. NLANR ES co-sponsors, with Internet2, the NLANR/I2 Joint Technical Workshops, which provide a forum for disseminating up-to-date information on high performance networking technologies, services and applications to the high performance networking community.

TCP Enhancements Project  (1995-2000).  Ongoing research on developing enhancements to the TCP protocol. Specifically, our work on TCP congestion control has resulted in the development of several algorithms that can improve TCP performance in high-performance environments, including contributions to RFC2018 "TCP Selective Acknowledgement Options".

NCNE GigaPoP (1995 - Present).  Manages and operates the NCNE GigaPoP, a regional network aggregation point providing high speed commodity and research network access to sites in Western and Central Pennsylvania and West Virginia.  The focus of the GigaPoP is to provide cost-effective, high capacity, state-of-the-art network connectivity to the university community.

Common Knowledge: Pittsburgh (1992-1997). A collaboration between the Pittsburgh Public School District, the University of Pittsburgh and the Pittsburgh Supercomputing Center, to incorporate network and computing technology into the curriculum within the school district. Provided technical expertise for defining, designing, implementing and supporting network and computing infrastructure based on the curriculum requirements within the school district. 


9.5         Bill Nickless

Argonne National Laboratory                                                           Office:      +1 630 252 7390

Mathematics and Computer Science Division                 Fax:       +1 603 719 8967

9700 South Cass Avenue Building 221                            Email:     nickless@mcs.anl.gov

Argonne, IL 60439

 

EDUCATION

BS in Computer Science, Andrews University, Berrien Springs, Michigan, 1991

 

PROFESSIONAL EXPERIENCE

Experimental Systems Engineer, Argonne National Laboratory, 1991 to present.

 

Mr. Nickless has been working on IP routing for over five years.

 

For the past two years he has been focusing on internetwork IP multicast deployment.  This work has included deployment of IP multicast at Argonne National Laboratory, between Argonne and other sites and networks, and at other sites funded by the National Science Foundation's Access Grid project.  He has been instrumental in bringing IP Multicast deployment up to a usable state for multi-site interactive audiovisual conferences at sites ranging from National Laboratories to major State universities to Native American Tribal colleges.

 

As part of the NSF Access Grid project, he provided specifications to the National Laboratory for Applied Network Research that resulted in the IP Multicast Beacon.  Nearly fifty Access Grid nodes and other interested parties are running the Beacon code on a continuous basis worldwide.

 

Mr. Nickless is currently active in the Multicast Source Discovery Protocol working group in the Internet Engineering Task Force, and has recently authored two Internet Drafts for modifications and enhancements to MSDP.

 

Recent publications:

 

J. P. Navarro, B. Nickless, and L. Winkler.  "Combining Cisco NetFlow Exports with Database Technology for Usage Statistics, Intrusion Detection, and Network Forensics."  14th Systems Administration Conference, USENIX, 3-8 December 2000, New Orleans, Louisiana USA.

 

B. Nickless.  "An Alternative Way To Stop MSDP Loops Through Caching."  Internet Draft.

 

B. Nickless.  "An MSDP Query Protocol."  Internet Draft.


9.6         Linda Winkler

Argonne National Laboratory

9700 South Cass Ave

Argonne, IL 60439

(630) 252-7236

_______________________________________________________________

9.6.1           Professional Experience

Argonne National Laboratory

9.6.1.1         Network Engineer 1999 – present

-       Responsible for selection of appropriate advanced networking technologies in support of Mathematics and Computer Science related research.

-       Responsible for engineering and implementation of Argonne’s complex external networking activities.

-       Continue network engineering activities for TransPAC, STARTAP and StarLight.

-       WAN engineering for iGRID demonstration at INET 2000.

9.6.1.2         Manager Network Technologies 1998 -1999

-       Responsible for selection of appropriate networking technologies for implementation on the Argonne campus-wide network.

-       Implemented ATM-based campus backbone network.

-       Responsible for Work for Others contract with Indiana University to participate in the National Science Foundation High Performance International Internet award to connect the Asian Pacific Advanced Network to US high performance research and education networks.

-       Technical Director of the Metropolitan Research and Education Network (MREN).

-       Consulting on engineering issues for the STARTAP.

9.6.1.3         Computer Scientist 1982 - 1998

-       Investigated various leading-edge networking technologies in support of scientific and administrative computing.

-       Architected the international experimental network for the 1995 I-WAY demonstration at SC’95.

-       Expertise in the design and management of network infrastructures supporting OSPF, BGP, IP multicast, ATM and SONET.

-       Responsible for design of local area networks supporting the administrative and scientific computing environment of an R&D laboratory. Provided multiprotocol support including AppleTalk, DECNET, IPX and TCP/IP. Experience with various physical media including Ethernet, FDDI and ATM. Responsible for coordinating routing and bridging for Laboratory-wide networks. Responsible for coordinating external wide-area production Internet as well as research connectivity.

-       Led a technical committee of network engineers from Big Ten universities in coordinating connectivity via CICnet.

9.6.2           Education

-       Master of Science in Management, Purdue University, 1983.

-       Bachelor of Science in Computer Information Systems, Purdue University, 1980.

9.6.3           Recent publications

J. P. Navarro, B. Nickless, and L. Winkler.  "Combining Cisco NetFlow Exports with Database Technology for Usage Statistics, Intrusion Detection, and Network Forensics."  14th Systems Administration Conference, USENIX, 3-8 December 2000, New Orleans, Louisiana USA

V. Sander, I. Foster, A. Roy, L. Winkler, “A Differentiated Services Implementation for High Performance TCP Flows.” Accepted for TERENA Networking Conference 2000, Lisbon, Portugal.  http://www-fp.mcs.anl.gov/qos/papers/diff.pdf

I. Foster, A. Roy, V. Sander, L. Winkler, “End-to-End Quality of Service for High-End Applications.”  Submitted to IEEE Journal on Selected Areas in Communications Special Issue on QoS in the Internet, 1999. http://www-fp.globus.org/documentation/incoming/ieee.pdf

R. Carlson, L. Winkler, “RFC2583 Guidelines for Next Hop Client (NHC) Developers.” May 1999.

ftp://ftp.isi.edu/in-notes/rfc2583.txt

 

 


9.7         Vern Paxson

Dr. Vern Paxson

Vern Paxson is a Staff Scientist in the National Energy Research Scientific Computing Division of Lawrence Berkeley National Laboratory, and a senior scientist at the AT&T Center for Internet Research at the International Computer Science Institute in Berkeley (ACIRI).  He served as a principal investigator of the National Internet Measurement Infrastructure (NIMI) project, funded by the NSF, and currently serves as a PI for the DARPA-sponsored Multicast Inference of Network Characteristics (MINC) project, which continues development of NIMI through FY01.  Dr. Paxson's expertise is in Internet measurement and network intrusion detection.  His thesis on measurements and analysis of end-to-end Internet dynamics was awarded U.C. Berkeley's Sakrison Memorial Prize for outstanding dissertation research, and one of the four papers from it that appeared in SIGCOMM (and SIGMETRICS) won the SIGCOMM best student paper award.  The journal version of the paper was awarded the IEEE Communications Society William R. Bennett Prize.  His paper on the Bro intrusion detection system was awarded the USENIX Security Symposium's Best Paper award, and he has been a co-recipient of two USENIX Lifetime Achievement awards, for major contributions to the Software Tools project (1996) and for contributions to BSD Unix (1993).  He also won best-of-show in the 1992 IOCCC.  He serves on the editorial board of IEEE/ACM Transactions on Networking and has served on numerous program committees, including SIGCOMM 1997-2001.  He is co-chair of the SIGCOMM 2002 conference.  In 1998-2000 he served (with Scott Bradner of Harvard University) on the Internet Engineering Steering Group as Area Director for Transport for the Internet Engineering Task Force.  He has co-authored 10 RFCs and chaired IETF working groups on Internet Performance Metrics (ippm), TCP Implementation (tcpimpl), and Endpoint Congestion Management (ecm).

 

9.7.1           Selected publications

(see http://www.aciri.org/vern/papers.html)

Difficulties in Simulating the Internet, S. Floyd & V. Paxson, to appear in IEEE/ACM Transactions on Networking, August 2001.

Detecting Backdoors, Y. Zhang and V. Paxson, Proc. 9th USENIX Security Symposium, August 2000.  Awarded best student paper.

Detecting Stepping Stones, Y. Zhang and V. Paxson, Proc. 9th USENIX Security Symposium, August 2000.

Experiences with NIMI, V. Paxson, A. Adams, and M. Mathis, Proc. Passive & Active Measurement: PAM-2000.

The Use of End-to-end Multicast Measurements for Characterizing Internal Network Behavior, A. Adams et al, IEEE Communications 38(5), May 2000.

End-to-End Internet Packet Dynamics, V. Paxson, IEEE/ACM Transactions on Networking 7(3), June 1999.

Bro: A System for Detecting Network Intruders in Real-Time, Computer Networks 31(23-24), December 1999.

TCP Congestion Control, M. Allman, V. Paxson, and W. Stevens, RFC 2581, Proposed Standard, April 1999.

Framework for IP Performance Metrics, V. Paxson et al, RFC 2330, May 1998.

An Architecture for Large-Scale Internet Measurement, V. Paxson et al, IEEE Communications 36(8), August 1998.

Where Mathematics meets the Internet, W. Willinger and V. Paxson, Notices of the American Mathematical Society 45(8), August 1998.

End-to-End Routing Behavior in the Internet, IEEE/ACM Transactions on Networking 5(5), October 1997.

Wide-Area Traffic: The Failure of Poisson Modeling, IEEE/ACM Transactions on Networking 3(3), June 1995.

9.8         Rich Wolski

See attachment.


10     Description of Facilities and Resources

10.1     SLAC Facilities & Resources

SLAC has an OC3 Internet connection to ESnet, an OC12 connection to Stanford University and thence to CalREN/Internet2, and an experimental OC48 connection to the National Transparent Optical Network (NTON). The latter was used at SC2000 to demonstrate bulk-throughput rates of over 990 Mbits/s from Dallas, Texas to SLAC. SLAC is also part of the ESnet QoS pilot, with a 3.5 Mbps ATM PVC to LBNL, and SLAC is connected to the IPv6 testbed.

SLAC hosts network measurement probes from the following projects: AMP, NIMI, RIPE and Surveyor. We are working with CAIDA to install a skitter measurement probe at SLAC. SLAC has two GPS aerials, with connections to provide accurate time synchronization.

The SLAC data center contains a Sun E10000 supercomputer with 64 processors in a symmetric multiprocessing configuration. There are also large farms of compute servers (over 1500 processors), and high-performance data-handling systems provide access to over 74 TB of disk and a capacity of about 2 PB of automated-access tape storage.

10.2       PSC Facilities & Resources

Network connectivity to PSC is provided via the Pittsburgh GigaPoP.  High-performance network connectivity includes an OC-12 Abilene connection and an OC-3 vBNS connection, as well as access to an OC-48 connection to HSCC (the DARPA-funded High Speed Connection Consortium).  PSC is one of four core nodes on the Abilene/I2 IPv6 testbed and is an active participant in I2 working groups, including QoS and multicast.  Commercial Internet connectivity consists of over 300 Mbps of bandwidth; current network service providers include an OC-3 to AT&T Worldnet, an OC-3 to UUNET and a fractional DS-3 to Sprint.

PSC is one of the lead organizations in the NIMI project, hosting multiple NIMI probes as well as the NIMI WWW site.  PSC also participates in the NLANR AMP measurement project.

10.2.1       Computing Resources:

PSC has extensive computing facilities ranging from the NSF-funded Terascale Computing System (TCS) to a Cray T3E to workstation clusters.  These facilities are briefly described below.

The initial TCS system consists of 64 interconnected Compaq ES40 AlphaServers, each of which houses four EV67 microprocessors.  When completed, the full-scale TCS will consist of 682 or more Compaq AlphaServers, more powerful than the ES40s of the initial system, each housing four EV68 microprocessors.

PSC’s massively parallel Cray T3E supercomputer consists of 544 DEC Alpha processors, each running at 450 MHz, and is capable of a peak computational rate of 460 Gflops. Archival storage for this system is provided by a ten-processor Cray J90 system equipped with extensive I/O capabilities.

PSC is building a cluster of 20 four-processor XEON servers linked by a high-speed interconnect fabric. Phase 1 of this cluster (ten servers, 100 Mbps Ethernet) is now in production service. 

PSC provides other high-end computing facilities, including a 12-processor SGI Challenge-XL, a 4-processor SGI Onyx, a 4-processor AlphaServer 8400 5/300 system and a number of powerful workstations.

10.2.2       Office: 

The computational facilities (supercomputers, associated front ends and peripheral equipment) are housed at the Westinghouse Electric Corporation Energy Center in Monroeville, PA, under a subcontract agreement.  The PSC headquarters are located approximately 20 kilometers west of the Westinghouse facility, in the Mellon Institute building on the campus of Carnegie Mellon University, adjacent to the University of Pittsburgh.

10.3     ANL Facilities & Resources

Argonne National Laboratory is well positioned for NIMI node development and testing.  ANL gets its primary Internet service via an OC-12 ATM service provided by ESnet, and will soon have a separate OC-12 ATM service to the Chicago NAP, where ANL peers with national and international networks such as Abilene, vBNS, TransPAC, STARTAP and CA*Net.  ANL is the lead institution on a State of Illinois funded dark-fiber network being developed for the Chicago area, with an additional link to the University of Illinois at Urbana-Champaign.  In addition to these network facilities, ANL is funded to work on network-based applications such as the Access Grid, with its heavy emphasis on IP Multicast service.  Other network-based applications and toolkits from ANL include Globus, GriPhyN, and MPI.

11     Appendix

11.1     Expression of interest from kc claffy for CAIDA

From: k claffy [kc@ipn.caida.org]

Sent: Wednesday, March 07, 2001 3:00 PM

To: Cottrell, Les

Cc: margaret; tesa@caida.org

Subject: Re: draft email for our proposal

 

Les:

 

Deploying and testing the proposed CAIDA bandwidth estimation tools to probes located at high performance ESnet and ESnet collaborator sites, as well as to probes on the periphery of extremely high bandwidth networks such as NTON, will greatly assist in understanding and validating the scaling properties of the proposed tools/algorithms. We would like, therefore, to collaborate with the IEPM/PingER team and the proposed "Active Internet Measurement for ESnet" team to assist in deploying, integrating and configuring the tools in their measurement probes. CAIDA will also provide access to the data to assist in understanding how to achieve optimum bulk throughput for high energy physics applications.

 

kc

11.2     Expression of interest from Vern Paxson

From: Vern Paxson [vern@ee.lbl.gov]

Sent: Sunday, March 11, 2001 1:55 AM

To: Cottrell, Les

Subject: support for AIME proposal

 

Dear Les,

 

This note is to confirm my intention to participate (as an unfunded collaborator) in the SLAC-led "Active Internet Measurement for ESNET" (AIME) proposal.  As one of the principal architects of NIMI, I will be able to provide expert consulting and guidance.  I look forward to the research and development efforts to expand the NIMI tools and core components to support ongoing operational use of NIMI for status monitoring and longitudinal analysis of the ESNET infrastructure.  Please let me know of any additional way by which I can be of assistance to this project, which I find highly interesting and important to the evolution of the NIMI infrastructure.

 

                Thanks,

                                Vern

Dr. Vern Paxson

AT&T Center for Internet Research

    at the International Computer Science Institute (ACIRI)

and

Lawrence Berkeley National Laboratory

510-666-2882 (ACIRI)

510-486-7504 (LBNL)

510-666-2956 (FAX)

vern@aciri.org, vern@ee.lbl.gov

http://www.aciri.org/vern

11.3     Expression of Interest from LBNL

From: Brian Tierney [bltierney@lbl.gov]

Sent: Tuesday, March 13, 2001 10:10 AM

To: Cottrell, Les

Cc: Deborah Agarwal

Subject: [Fwd: RE: scidac]

 

Dear Les,

 

This note is to confirm the interest of the proposed "Self-Configuring Network Monitor" project in the SLAC-led "Active Internet Measurement for ESnet (AIME)" proposed project. We believe the integration of our passive monitoring system with your team's active monitoring efforts will provide an essential component in a complete end-to-end network test and monitoring capability. We hope to collaborate closely with your active measurement proposal team to understand how the passive and active sets of information complement one another and also mutually validate one another. In particular, we see considerable advantages in being able to request passive measurements while making some special active measurements.

 

Brian L. Tierney

PI of the LBNL-led "Self-Configuring Network Monitor" proposal

------------------------------------------------------------------------

  Brian L. Tierney,   Lawrence Berkeley National Laboratory (LBNL)        

  1 Cyclotron Rd.  MS: 50B-2239,  Berkeley, CA  94720 

  tel: 510-486-7381    fax: 510-495-2998   efax: 603-719-5047

  bltierney@lbl.gov   http://www-didc.lbl.gov/~tierney 

 

 currently on leave from LBNL to CERN:

  CERN, IT/PDP, Bldg 31, 2-013, 1211 Geneva 23, Switzerland

  tel: +41 22 76 74543  fax: +41 22 76 77155

11.4     Expression of Interest from Rich Wolski

See attachment.