1. “Metacomputer Architecture of the Global LambdaGrid:
How Personal Light Paths are Transforming e-Science”
Invited Talk
Departments of Computer Science / Physics and Astronomy
University of Missouri@Columbia
Columbia, MO
May 15, 2008
Dr. Larry Smarr
Director, California Institute for Telecommunications and
Information Technology
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
2. Abstract
I will describe my research in metacomputer architecture, a term I coined in
1988, in which one builds virtual ensembles of computers, storage, networks,
and visualization devices into an integrated system. Working with a set of
colleagues, I have driven development in this field through national and
international workshops and conferences, including SIGGRAPH,
Supercomputing, and iGrid. Although the vision has remained constant over
nearly two decades, it is only the recent availability of dedicated optical paths,
or lambdas, that has enabled the vision to be realized. These lambdas enable
the Grid program to be completed, in that they add the network elements to the
compute and storage elements which can be discovered, reserved, and
integrated by the Grid middleware to form global LambdaGrids. I will describe
my current research in the four grants in which I am PI or co-PI (OptIPuter,
Quartzite, LOOKING, and CAMERA), which not only develop the computer science
of LambdaGrids but also couple intimately to the application drivers in
biomedical imaging, ocean observatories, and marine microbial metagenomics.
3. Metacomputer:
Five Eras
• The Formative Years (1965-1985)
• The Early Days (1985-1995)
• The Emergence of the Grid (1995-2000)
• From Grid to LambdaGrid (2000-2005)
• The OptIPlanet Collaboratory (2005-2010)
4. TV and Movies of 40 Years Ago
Envisioned Telepresence Displays
Source: Star Trek 1966-68; Barbarella 1968
6. My Early Research was on Computational Astrophysics
Before There Were National Supercomputer Centers
Eppley and Smarr 1977
Hawley and Smarr 1985
Norman, Winkler, Smarr, Smith 1982
8. The First Metacomputer:
NSFnet and the Six NSF Supercomputers
NSFNET 56 Kb/s Backbone (1986-88): CTC, NCAR, JVNC, PSC, NCSA, SDSC
9. NCSA Telnet--“Hide the Cray”
One of the Inspirations for the Metacomputer
• NCSA Telnet -- Interactive Access
– From Macintosh or PC Computer
– To Telnet Hosts on TCP/IP Networks
• Allows for Simultaneous
Connections
– To Numerous Computers on The Net
– Standard File Transfer Server (FTP)
– Lets You Transfer Files to and from
Remote Machines and Other Users
John Kogut Simulating
Quantum Chromodynamics
He Uses a Mac—The Mac Uses the Cray
Source: Larry Smarr 1985
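A minimal modern sketch of the access pattern this slide describes: one client holding sessions to several remote machines and pulling files over standard FTP. This is not NCSA Telnet itself, and the host names are hypothetical.

```python
# Minimal sketch (not NCSA Telnet): open sessions to several remote hosts
# and pull a file from each over standard FTP, the pattern the slide describes.
from ftplib import FTP

HOSTS = ["cray.example.edu", "convex.example.edu"]  # hypothetical remote machines

def fetch_readme(host: str) -> bytes:
    """Open an FTP session to one host and retrieve a small file."""
    chunks = []
    with FTP(host, timeout=30) as ftp:   # one connection per remote machine
        ftp.login()                      # anonymous login, as on early FTP servers
        ftp.retrbinary("RETR README", chunks.append)
    return b"".join(chunks)

if __name__ == "__main__":
    # "Simultaneous connections to numerous computers on the net":
    # iterated here for brevity; a real client keeps the sessions open side by side.
    for host in HOSTS:
        try:
            print(host, len(fetch_readme(host)), "bytes")
        except OSError as err:
            print(host, "unreachable:", err)
```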
10. From Metacomputer to TeraGrid and OptIPuter:
15 Years of Development
“Metacomputer”
Coined by Smarr
in 1988
Smarr: TeraGrid PI, OptIPuter PI (timeline from 1992)
11. Long-Term Goal: Dedicated Fiber Optic Infrastructure
Using Analog Communications to Prototype the Digital Future
“What we really have to do is eliminate distance between individuals who want to
interact with other people and with other computers.”
― Larry Smarr, Director, NCSA, SIGGRAPH 1989
Illinois to Boston demo:
“We’re using satellite technology…to demo what it might be like to have high-speed
fiber-optic links between advanced computers in two different geographic locations.”
― Al Gore, Senator
Chair, US Senate Subcommittee on Science, Technology and Space
12. The Bellcore VideoWindow --
A Working Telepresence Experiment
(1989)
“Imagine sitting in your work place lounge having coffee with some colleagues.
Now imagine that you and your colleagues are still in the same room, but are
separated by a large sheet of glass that does not interfere with your ability to
carry on a clear, two-way conversation. Finally, imagine that you have split the
room into two parts and moved one part 50 miles down the road, without
impairing the quality of your interaction with your friends.”
Source: Fish, Kraut, and Chalfonte, CSCW 1990 Proceedings
13. NCSA Mosaic, a Module in NCSA Collage Desktop
Collaboration Software, Led to the Modern Web World
NCSA Programmers: NCSA Collage (1990, Open Source) → NCSA Mosaic (1993)
Licensing: 100 Commercial Licensees
Source: Larry Smarr
14. NCSA Web Server Traffic Increase Led to
NCSA Creating the First Parallel Web Server
Peak was 4 Million Hits per Week!
(Traffic graph spans 1993-1995)
Data Source: Software Development Group, NCSA,
Graph: Larry Smarr
16. I-WAY Prototyped the National Metacomputer
-- Supercomputing ‘95 I-WAY Project
• 60 National & Grand Challenge
Computing Applications
• I-Way Featured:
– IP over ATM with an OC-3 (155 Mbps) Backbone
– Large-Scale Immersive Displays
– I-Soft Programming Environment
– Led Directly to Globus
Application Images: Cellular Semiotics; UIC CitySpace
http://archive.ncsa.uiuc.edu/General/Training/SC95/GII.HPCC.html
Source: Larry Smarr, Rick Stevens, Tom DeFanti
17. Caterpillar / NCSA: Distributed Virtual Reality
for Global-Scale Collaborative Prototyping
Real Time Linked Virtual Reality and Audio-Video
Between NCSA, Peoria, Houston, and Germany
1996
www.sv.vt.edu/future/vt-cave/apps/CatDistVR/DVR.html
18. Concept of NCSA Alliance
National Technology Grid
1997
155 Mbps vBNS
Image From LS Talk at Grid Workshop Argonne Sept. 1997
Image from Jason Leigh, EVL, UIC
19. From Metacomputing to the Grid
1998
Science Portals & Workbenches
Twenty-First Century Applications
Access Grid / Computational Grid
Access Services & Technology / Computational Services
Grid Services (resource independent)
Grid Fabric (resource dependent)
Networking, Devices and Systems
(Performance cuts across all layers)
“A source book for the history of the future” -- Vint Cerf
www.mkp.com/grids
20. Extending Collaboration From Telephone Conference Calls
to Access Grid International Video Meetings
1999 Can We Create Realistic Telepresence
Using Dedicated Optical Networks?
Access Grid Lead-Argonne
NSF STARTAP Lead-UIC’s Elec. Vis. Lab
22. States are Acquiring Their Own Dark Fiber Networks --
Illinois’s I-WIRE and Indiana’s I-LIGHT
1999
Today Two Dozen
State and Regional
Optical Networks
Source: Larry Smarr, Rick Stevens, Tom DeFanti, Charlie Catlett
23. Dedicated Optical Channels Make
High Performance Cyberinfrastructure Possible
Wavelength Division Multiplexing (WDM): each wavelength (“Lambda”, with c = λ·f) is a dedicated channel
10 Gbps per User ~ 200x Shared Internet Throughput
Source: Steve Wallach, Chiaro Networks
Parallel Lambdas are Driving Optical Networking
The Way Parallel Processors Drove 1990s Computing
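A quick back-of-envelope check of these figures, as a sketch in Python; the 1550 nm carrier wavelength and the implied ~50 Mbps shared rate are assumptions for illustration, not stated on the slide.

```python
# Back-of-envelope check of the slide's numbers (assumptions noted inline).
C = 3.0e8                      # speed of light, m/s

# c = lambda * f: a typical DWDM carrier near 1550 nm (assumed wavelength)
wavelength_m = 1550e-9
carrier_hz = C / wavelength_m
print(f"1550 nm carrier frequency ~ {carrier_hz/1e12:.1f} THz")   # ~193 THz

# "10 Gbps per user ~ 200x shared Internet throughput" implies an effective
# shared rate of about 10 Gbps / 200 = 50 Mbps (assumption).
dedicated_bps, ratio = 10e9, 200
print(f"implied shared throughput ~ {dedicated_bps/ratio/1e6:.0f} Mbps")

# NLR capacity from the next slide: 4 lambdas initially, 40 at buildout, 10 Gb each.
print(f"NLR initial:  {4*10} Gbps aggregate")
print(f"NLR buildout: {40*10} Gbps aggregate")
```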
24. National Lambda Rail (NLR) Provides
Cyberinfrastructure Backbone for U.S. Researchers
Links Two
Dozen State and
Regional Optical
Networks
NLR 4 x 10Gb Lambdas Initially
Capable of 40 x 10Gb wavelengths at Buildout
26. Two New Calit2 Buildings Provide
New Laboratories for “Living in the Future”
• “Convergence” Laboratory Facilities
– Nanotech, BioMEMS, Chips, Radio, Photonics
– Virtual Reality, Digital Cinema, HDTV, Gaming
• Over 1000 Researchers in Two Buildings
– Linked via Dedicated Optical Networks
UC Irvine
www.calit2.net
Preparing for a World in Which
Distance is Eliminated…
28. To Build a Campus Dark Fiber Network—
First, Find Out Where All the Campus Conduit Is!
29. Current UCSD Experimental Optical Core:
Ready to Couple to CENIC L1, L2, L3 Services
Goals by 2008:
– >= 50 Endpoints at 10 GigE
– >= 32 Packet Switched
– >= 32 Switched Wavelengths
– >= 300 Connected Endpoints
Approximately 0.5 Tbit/s Arrive at the “Optical” Center of Campus
Switching will be a Hybrid Combination of: Packet, Lambda, Circuit --
OOO and Packet Switches Already in Place (Lucent, Glimmerglass, Force10, Cisco 6509 OptIPuter Border Router)
Funded by NSF MRI Grant
Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)
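A one-line arithmetic check of the stated capacity goal; this sketch assumes each of the 50 endpoints contributes a full 10 GigE.

```python
# Sanity check: 50 endpoints at 10 GigE arriving at the "optical" center of
# campus is about 0.5 Tbit/s of aggregate capacity, matching the slide.
endpoints = 50
gige_per_endpoint_gbps = 10
aggregate_tbps = endpoints * gige_per_endpoint_gbps / 1000
print(f"{endpoints} x {gige_per_endpoint_gbps} GigE ~ {aggregate_tbps} Tbit/s")  # 0.5 Tbit/s
```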
30. The OptIPuter Project: Creating High Resolution Portals
Over Dedicated Optical Channels to Global Science Data
Scalable Adaptive Graphics Environment (SAGE)
$13.5M Over Five Years
Picture Source: Mark Ellisman, David Lee, Jason Leigh
Calit2 (UCSD, UCI) and UIC Lead Campuses—Larry Smarr PI
Univ. Partners: SDSC, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
31. Special issue of Communications of the ACM (CACM):
Blueprint for the Future of High-Performance Networking
• Introduction
– Maxine Brown (guest editor)
• TransLight: A Global-scale LambdaGrid for e-Science
– Tom DeFanti, Cees de Laat, Joe Mambretti,
Kees Neggers, Bill St. Arnaud
• Transport Protocols for High Performance
– Aaron Falk, Ted Faber, Joseph Bannister,
Andrew Chien, Bob Grossman, Jason Leigh
• Data Integration in a Bandwidth-Rich World
– Ian Foster, Robert Grossman
• The OptIPuter
– Larry Smarr, Andrew Chien, Tom DeFanti,
Jason Leigh, Philip Papadopoulos
• Data-Intensive e-Science Frontier Research
– Harvey Newman, Mark Ellisman, John Orcutt
Source: Special Issue of Comm. ACM 2003
32. OptIPuter Software Architecture--a Service-Oriented
Architecture Integrating Lambdas Into the Grid
Distributed Applications / Web Services
Visualization: Telescience, SAGE, JuxtaView
Data Services: LambdaRAM, Vol-a-Tile
Distributed Virtual Computer (DVC) API
DVC Configuration, DVC Runtime Library
DVC Services: DVC Job Scheduling, DVC Communication
DVC Core Services: Resource Identify/Acquire, Namespace Management, Security Management, High Speed Communication, Storage Services
Globus: PIN/PDC, GRAM, GSI, XIO, RobuStore, Discovery and Control
Transport Protocols over IP: GTP, XCP, UDT, CEP, LambdaStream, RBUDP
Lambdas
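A hypothetical sketch of how an application might use the layering above, bundling compute nodes and a reserved lambda into one virtual resource. The class and method names are illustrative only and are not the actual DVC, Globus, or transport APIs.

```python
# Hypothetical sketch of the DVC layering; names are illustrative, NOT real APIs.
from dataclasses import dataclass, field

@dataclass
class Lambda:                      # a dedicated optical path reserved for one job
    endpoints: tuple
    gbps: int

@dataclass
class DistributedVirtualComputer:
    """A DVC bundles compute, storage, and network (lambdas) into one resource."""
    nodes: list = field(default_factory=list)
    lambdas: list = field(default_factory=list)

    def add_node(self, name): self.nodes.append(name)
    def add_lambda(self, lam): self.lambdas.append(lam)

    def run(self, app):
        # In the real stack this would go through DVC Job Scheduling /
        # DVC Communication, then Globus services and the transport protocols.
        return f"running {app} on {len(self.nodes)} nodes over {len(self.lambdas)} lambdas"

# Usage: discover resources, reserve a lambda, integrate, run a visualization app.
dvc = DistributedVirtualComputer()
for node in ["render-0", "render-1", "storage-0"]:    # hypothetical resources
    dvc.add_node(node)
dvc.add_lambda(Lambda(endpoints=("UCSD", "UIC"), gbps=10))
print(dvc.run("JuxtaView"))
```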
33. My OptIPortal™ – Affordable
Termination Device for the OptIPuter Global Backplane
• 20 Dual CPU Nodes, 20 24” Monitors, ~$50,000
• 1/4 Teraflop, 5 Terabyte Storage, 45 Mega Pixels--Nice PC!
• Scalable Adaptive Graphics Environment ( SAGE) Jason Leigh, EVL-UIC
Source: Phil Papadopoulos SDSC, Calit2
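A rough check of the OptIPortal numbers; the 1920x1200 panel resolution is an assumption, not stated on the slide.

```python
# Rough check of the slide's OptIPortal figures (panel resolution assumed).
nodes, monitors = 20, 20
pixels_per_monitor = 1920 * 1200                      # assumed 24" panel resolution
total_mpixels = monitors * pixels_per_monitor / 1e6
print(f"{monitors} monitors ~ {total_mpixels:.0f} Mpixels")   # ~46, consistent with "45 Mega Pixels"

storage_tb = 5
print(f"{storage_tb} TB / {nodes} nodes = {storage_tb*1000/nodes:.0f} GB per node")
```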
35. The Calit2 200 Megapixel OptIPortals at UCSD and UCI
Are Now a Gbit/s HD Collaboratory
NASA Ames Visit Feb. 29, 2008
Calit2@ UCI wall
Calit2@ UCSD wall
NASA Ames is Completing a 245 Mpixel Hyperwall
as Project Columbia Interface
36. Rocks/CGLX OptIPortal
Uses Hardware-Accelerated OpenGL
• CGLX Features:
– Cross-Platform Hardware-Accelerated OpenGL Application Rendering (CGLX Tools on cglXlib)
– Synchronized Multi-Layer OpenGL Context Support (AGL / GL / GLX / X driver on Mac OS X and Linux/UNIX graphics hardware)
– Distributed Event Management (a head-node event queue replayed on the render nodes in serial or threaded mode)
– Scalable Multi-Display Support (Network Layer → Cluster Layer → Render Node Layer driving the wall displays Dsp. 0, 1, 2 over a high-performance network)
Source: Kai-Uwe Doerr, Falko Kuester, Calit2
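An illustrative sketch of the distributed event management pattern described above: a head node sends each UI event to every render node so the wall's OpenGL contexts stay synchronized. This is not the CGLX API; the node names and port are hypothetical.

```python
# Illustrative sketch of distributed event management on a tiled display wall.
# NOT the CGLX API; hosts and port are hypothetical.
import json
import socket

RENDER_NODES = [("render-0", 7000), ("render-1", 7000)]   # hypothetical cluster nodes

def broadcast_event(event: dict) -> None:
    """Send one serialized event (e.g. a pan or zoom) to every render node."""
    payload = (json.dumps(event) + "\n").encode()
    for host, port in RENDER_NODES:
        try:
            with socket.create_connection((host, port), timeout=1) as conn:
                conn.sendall(payload)
        except OSError:
            pass   # in a real system the node would be flagged as unhealthy

if __name__ == "__main__":
    broadcast_event({"type": "pan", "dx": 40, "dy": 0, "frame": 1})
```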
38. The Genetic Diversity of Ocean Microbes Provides Novel
Genetic Components for Bioengineering Clean Energy
Each Sample: ~2000 Microbial Species Plus Specified Ocean Data
Plus 155 Marine Microbial Genomes
Sorcerer II Data Will Double the Number of Proteins in GenBank!
39. Calit2’s Direct Access Core Architecture
An OptIPuter Metagenomics Metacomputer
Data Sources: Sargasso Sea Data / Sorcerer II Expedition (GOS); JGI Community Sequencing Project; Moore Marine Microbial Project; NASA and NOAA Satellite Data; Community Microbial Metagenomics Data
Core: Dedicated Compute Farm (1000s of CPUs), Data-Base Farm, and Flat File Server Farm on a 10 GigE Fabric
Traditional User: Web Portal + Web Services (Request / Response)
Direct Access User Environment over Lambda Connections: StarCAVE, Varrier, OptIPortal
TeraGrid: Cyberinfrastructure Backplane (scheduled activities, e.g. all-by-all comparison) (10,000s of CPUs)
Source: Phil Papadopoulos, SDSC, Calit2
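A minimal sketch of how an all-by-all comparison can be partitioned across a large compute pool; the round-robin chunking here is illustrative, not CAMERA's or TeraGrid's actual scheduler.

```python
# Sketch of the "all-by-all comparison" workload: n items give n*(n-1)/2
# pairwise comparisons, which are chunked across many CPUs.
from itertools import combinations

def partition_pairs(n_items: int, n_workers: int):
    """Split all unordered pairs (i, j) round-robin across workers."""
    buckets = [[] for _ in range(n_workers)]
    for k, pair in enumerate(combinations(range(n_items), 2)):
        buckets[k % n_workers].append(pair)
    return buckets

if __name__ == "__main__":
    n, workers = 2000, 10000            # e.g. ~2000 species, 10,000s of CPUs
    buckets = partition_pairs(n, workers)
    total = sum(len(b) for b in buckets)
    print(f"{n} items -> {total} comparisons "        # n*(n-1)/2 = 1,999,000
          f"(~{total // workers} per worker)")
```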
40. CAMERA’s Global Microbial Metagenomics CyberCommunity—Can
We Employ Social Network Software?
Over 1850 Registered Users From Over 50 Countries
41. The Global Lambda Integrated Facility (GLIF)
Creates MetaComputers on the Scale of Planet Earth
Maxine Brown, Tom DeFanti, Co-Chairs
iGrid 2005
THE GLOBAL LAMBDA INTEGRATED FACILITY
www.igrid2005.org
September 26-30, 2005
Calit2 @ University of California, San Diego
California Institute for Telecommunications and Information Technology
21 Countries Driving 50 Demonstrations
1 or 10Gbps to Calit2@UCSD Building
Sept 2005--
A Wide Variety of Applications
42. OptIPortals
Are Being Adopted Globally
Osaka U-Japan KISTI-Korea CNIC-China
AIST-Japan
UZurich
NCHC-Taiwan
SARA- Netherlands Brno-Czech Republic
U. Melbourne,
EVL@UIC Calit2@UCSD Calit2@UCI Australia
43. Green Initiative: Can Optical Fiber Replace Airline Travel for Continuing Collaborations?
Source: Maxine Brown, OptIPuter Project Manager
45. Launch of the 100 Megapixel OzIPortal Over Qvidium
Compressed HD on 1 Gbps CENIC/PW/AARNet Fiber
www.calit2.net/newsroom/release.php?id=1219
46. Victoria Premier and Australian Deputy Prime Minister
Asking Questions
www.calit2.net/newsroom/release.php?id=1219
47. University of Melbourne Vice Chancellor Glyn Davis
in Calit2 Replies to Question from Australia
48. OptIPlanet Collaboratory Persistent Infrastructure
Between Calit2 and U Washington
Photo Credit: Alan Decker Feb. 29, 2008
Ginger Armbrust’s Diatoms: Micrographs, Chromosomes, Genetic Assembly
iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
UW’s Research Channel: Michael Wellings
49. EVL’s SAGE Global Visualcasting to Europe
September 2007
Gigabit Streams: Image Source (OptIPuter servers at Calit2, San Diego) → Image Replication (OptIPuter SAGE-Bridge at StarLight, Chicago) → Image Viewing on OptIPortals at EVL Chicago, SARA Amsterdam, Masaryk University Brno, and the Russian Academy of Sciences Moscow (Oct 1)
Source: Luc Renambot, EVL
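A minimal sketch of the visualcasting fan-out pattern shown above: one incoming pixel stream is replicated by a bridge to every viewing site. This is not SAGE code, and the site names are hypothetical.

```python
# Sketch of the bridge's role: receive one gigabit pixel stream and fan it
# out to several viewing sites (hypothetical names, not SAGE code).
VIEWING_SITES = ["evl.example.org", "sara.example.org",
                 "muni.example.org", "ras.example.org"]

def replicate(frame_chunks, send):
    """Forward every chunk of the incoming stream to every viewing site."""
    for chunk in frame_chunks:
        for site in VIEWING_SITES:
            send(site, chunk)          # one copy per site, fanned out at the bridge

if __name__ == "__main__":
    sent = []
    replicate([b"frame-0", b"frame-1"], lambda site, chunk: sent.append((site, chunk)))
    print(len(sent), "chunk transmissions")   # 2 chunks x 4 sites = 8
```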
50. Telepresence Meeting
Using Digital Cinema 4k Streams
4k = 4000x2000 Pixels = 4xHD
Streaming 4k with JPEG 2000 Compression: ½ Gbit/sec
Lays Technical Basis for Global Digital Cinema
Pictured: Keio University President Anzai and UCSD Chancellor Fox
Calit2@UCSD Auditorium
Partners: Sony, NTT, SGI
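A rough data-rate check for this slide's figures; the 24 fps frame rate and 24-bit color depth are assumptions, not stated on the slide.

```python
# Rough data-rate check for streaming 4k with JPEG 2000 at ~1/2 Gbit/s.
# Frame rate (24 fps) and 24-bit color are assumptions.
width, height = 4000, 2000          # "4k = 4000x2000 pixels" per the slide
bits_per_pixel, fps = 24, 24
uncompressed_gbps = width * height * bits_per_pixel * fps / 1e9
compressed_gbps = 0.5
print(f"uncompressed ~ {uncompressed_gbps:.1f} Gbit/s")            # ~4.6 Gbit/s
print(f"JPEG 2000 compression ratio ~ {uncompressed_gbps/compressed_gbps:.0f}:1")
```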
51. 3D OptIPortals: Calit2 StarCAVE and Varrier:
Enable Exploration of Virtual Worlds
Connected at 20 Gb/s to CENIC, NLR, GLIF
15 Meyer Sound Speakers + Subwoofer
30 HD Projectors!
Passive Polarization -- Optimized the Polarization Separation and Minimized Attenuation
Cluster with 30 Nvidia 5600 Cards -- 60 GB Texture Memory
Source: Tom DeFanti, Greg Dawe, Calit2