http://www.businessweek.com/ap/financialnews/D8OFRMNG0.htm

The Associated Press, April 13, 2007, 1:17PM EST

Researchers explore scrapping Internet

By ANICK JESDANUN

NEW YORK

Although it has already taken nearly four decades to get this far in 
building the Internet, some university researchers with the federal 
government's blessing want to scrap all that and start over.

The idea may seem unthinkable, even absurd, but many believe a "clean 
slate" approach is the only way to truly address security, mobility 
and other challenges that have cropped up since UCLA professor 
Leonard Kleinrock helped supervise the first exchange of meaningless 
test data between two machines on Sept. 2, 1969.

The Internet "works well in many situations but was designed for 
completely different assumptions," said Dipankar Raychaudhuri, a 
Rutgers University professor overseeing three clean-slate projects. 
"It's sort of a miracle that it continues to work well today."

No longer constrained by slow connections, slow processors and high 
storage costs, researchers say the time has come to rethink the 
Internet's underlying architecture, a move that could mean replacing 
networking equipment and rewriting software on computers to better 
channel future traffic over the existing pipes.

Even Vinton Cerf, one of the Internet's founding fathers as 
co-developer of the key communications techniques, said the exercise 
was "generally healthy" because the current technology "does not 
satisfy all needs."

One challenge in any reconstruction, though, will be balancing the 
interests of various constituencies. The first time around, 
researchers were able to toil away in their labs quietly. Industry is 
playing a bigger role this time, and law enforcement is bound to make 
its needs for wiretapping known.

There's no evidence they are meddling yet, but once any research 
looks promising, "a number of people (will) want to be in the drawing 
room," said Jonathan Zittrain, a law professor affiliated with Oxford 
and Harvard universities. "They'll be wearing coats and ties and 
spilling out of the venue."

The National Science Foundation wants to build an experimental 
research network known as the Global Environment for Network 
Innovations, or GENI, and is funding several projects at universities 
and elsewhere through Future Internet Network Design, or FIND.

Rutgers, Stanford, Princeton, Carnegie Mellon and the Massachusetts 
Institute of Technology are among the universities pursuing 
individual projects. Other government agencies, including the Defense 
Department, have also been exploring the concept.

The European Union has also backed research on such initiatives, 
through a program known as Future Internet Research and 
Experimentation, or FIRE. Government officials and researchers met 
last month in Zurich to discuss early findings and goals.

A new network could run parallel with the current Internet and 
eventually replace it, or perhaps aspects of the research could go 
into a major overhaul of the existing architecture.

These clean-slate efforts are still in their early stages, though, 
and aren't expected to bear fruit for another 10 or 15 years -- 
assuming Congress comes through with funding.

Guru Parulkar, who will become executive director of Stanford's 
initiative after heading NSF's clean-slate programs, estimated that 
GENI alone could cost $350 million, while government, university and 
industry spending on the individual projects could collectively reach 
$300 million. Spending so far has been in the tens of millions of 
dollars.

And it could take billions of dollars to replace all the software and 
hardware deep in the legacy systems.

Clean-slate advocates say the cozy world of researchers in the 1970s 
and 1980s doesn't necessarily mesh with the realities and needs of 
the commercial Internet.

"The network is now mission critical for too many people, when in the 
(early days) it was just experimental," Zittrain said.

The Internet's early architects built the system on the principle of 
trust. Researchers largely knew one another, so they kept the shared 
network open and flexible -- qualities that proved key to its rapid 
growth.

But spammers and hackers arrived as the network expanded and could 
roam freely because the Internet doesn't have built-in mechanisms for 
knowing with certainty who sent what.
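
What that looks like in practice: a message carries whatever sender 
label the sender chose to write, and nothing in the network vouches 
for it. Below is a minimal Python sketch -- the message format and 
pre-shared key are hypothetical -- of the kind of authenticity check 
that has to be bolted on above the network to compensate.

    import hmac
    import hashlib

    # The network attaches no verifiable identity to a message: the
    # "FROM" field below is just bytes that anyone could have written.
    message = b"FROM:alice|meet at noon"

    # A common retrofit is a keyed digest (HMAC) over the message,
    # using a secret the two parties exchanged out of band.
    SHARED_KEY = b"key-exchanged-out-of-band"  # hypothetical secret

    def sign(msg: bytes, key: bytes) -> bytes:
        """Compute a keyed digest the receiver can verify."""
        return hmac.new(key, msg, hashlib.sha256).digest()

    def verify(msg: bytes, tag: bytes, key: bytes) -> bool:
        """Recompute the digest and compare in constant time."""
        return hmac.compare_digest(sign(msg, key), tag)

    tag = sign(message, SHARED_KEY)
    print(verify(message, tag, SHARED_KEY))                     # True
    print(verify(b"FROM:mallory|send cash", tag, SHARED_KEY))   # False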

The network's designers also assumed that computers are in fixed 
locations and always connected. That's no longer the case with the 
proliferation of laptops, personal digital assistants and other 
mobile devices, all hopping from one wireless access point to 
another, losing their signals here and there.

Engineers tacked on improvements to support mobility and improve 
security, but researchers say all that adds complexity, reduces 
performance and, in the case of security, amounts at most to bandages 
in a high-stakes game of cat and mouse.

Workarounds for mobile devices "can work quite well if a small 
fraction of the traffic is of that type," but could overwhelm 
computer processors and create security holes when 90 percent or more 
of the traffic is mobile, said Nick McKeown, co-director of 
Stanford's clean-slate program.
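
The workarounds McKeown describes boil down to indirection: a stable 
name is mapped, packet by packet, to whatever address the device 
currently has. A toy Python sketch of that per-packet detour (the 
registry, device name and addresses are all hypothetical):

    # Mobile-IP-style indirection: a stable identifier maps to the
    # device's current locator, and every packet pays the extra lookup.
    current_locator = {"laptop-17": "10.0.0.5"}  # hypothetical registry

    def move(device: str, new_address: str) -> None:
        """The device re-registers each time it hops to a new access point."""
        current_locator[device] = new_address

    def send(device: str, payload: str) -> None:
        """Each packet detours through the mapping before delivery."""
        addr = current_locator[device]  # per-packet indirection cost
        print(f"forwarding {payload!r} to {device} at {addr}")

    send("laptop-17", "hello")
    move("laptop-17", "192.168.1.9")  # hopped to another wireless AP
    send("laptop-17", "hello again")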

The Internet will continue to face new challenges as applications 
require guaranteed transmissions -- not the "best effort" approach 
that suffices for e-mail and other tasks with less time sensitivity.

Think of a doctor using teleconferencing to perform surgery remotely, 
or a customer of an Internet-based phone service needing to make an 
emergency call. In such cases, even small delays in relaying data can 
be deadly.
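
"Best effort" means the sender learns nothing about delivery. A short 
Python sketch using UDP, which exposes that model directly to the 
application (the address and port are arbitrary, and nothing needs to 
be listening):

    import socket

    # IP delivery is best effort: a datagram may be dropped, duplicated
    # or reordered, and the sender is never told.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(5):
        # sendto() returns once the datagram is handed to the stack;
        # success says nothing about whether it will ever arrive.
        sender.sendto(f"frame {seq}".encode(), ("127.0.0.1", 50007))
        print(f"handed frame {seq} to the network; delivery not guaranteed")
    sender.close()

A latency-critical application has to layer retransmission, buffering 
and jitter control on top of this model -- exactly the kind of 
workaround clean-slate researchers want to design away.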

And one day, sensors of all sorts will likely be Internet capable.

Rather than create workarounds each time, clean-slate researchers 
want to redesign the system to easily accommodate any future 
technologies, said Larry Peterson, chairman of computer science at 
Princeton and head of the planning group for the NSF's GENI.

Even if the original designers had the benefit of hindsight, they 
might not have been able to incorporate these features from the 
get-go. Computers, for instance, were much slower then, possibly too 
weak for the computations needed for robust authentication.

"We made decisions based on a very different technical landscape," 
said Bruce Davie, a fellow with network-equipment maker Cisco Systems 
Inc., which stands to gain from selling new products and 
incorporating research findings into its existing line.

"Now, we have the ability to do all sorts of things at very high 
speeds," he said. "Why don't we start thinking about how we take 
advantage of those things and not be constrained by the current 
legacy we have?"

Of course, a key question is how to make any transition -- and 
researchers are largely punting for now.

"Let's try to define where we think we should end up, what we think 
the Internet should look like in 15 years' time, and only then would 
we decide the path," McKeown said. "We acknowledge it's going to be 
really hard but I think it will be a mistake to be deterred by that."

Kleinrock, the Internet pioneer at UCLA, questioned the need for a 
transition at all, but said such efforts are useful for their 
out-of-the-box thinking.

"A thing called GENI will almost surely not become the Internet, but 
pieces of it might fold into the Internet as it advances," he said.

Think evolution, not revolution.

Princeton already runs a smaller experimental network called 
PlanetLab, while Carnegie Mellon has a clean-slate project called 100 
x 100.

These days, Carnegie Mellon professor Hui Zhang said he no longer 
feels like "the outcast of the community" as a champion of 
clean-slate designs.

Construction on GENI could start by 2010 and take about five years to 
complete. Once operational, it should have a decade-long lifespan.

FIND, meanwhile, funded about two dozen projects last year and is 
evaluating a second round of grants for research that could 
ultimately be tested on GENI.

These go beyond projects like Internet2 and National LambdaRail, both 
of which focus on next-generation needs for speed.

Any redesign may incorporate mechanisms, known as virtualization, for 
multiple networks to operate over the same pipes, making further 
transitions much easier. Also possible are new structures for data 
packets and a replacement of Cerf's TCP/IP communications protocols.
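
At its core, virtualization is encapsulation: tag each unit of 
traffic with the logical network it belongs to, so several networks, 
including experimental ones, can share one physical link and be 
swapped out independently. A toy Python sketch with a made-up 
two-field header:

    import struct

    # Illustrative outer header: 4-byte virtual-network ID plus a
    # 2-byte payload length. Real schemes (VLAN tags, tunnel headers)
    # differ in detail but follow the same wrap-and-unwrap pattern.
    HEADER = struct.Struct("!IH")

    def encapsulate(net_id: int, payload: bytes) -> bytes:
        """Wrap a payload in an outer header naming its virtual network."""
        return HEADER.pack(net_id, len(payload)) + payload

    def decapsulate(frame: bytes) -> tuple[int, bytes]:
        """Recover the virtual-network ID and the inner payload."""
        net_id, length = HEADER.unpack_from(frame)
        return net_id, frame[HEADER.size:HEADER.size + length]

    # Two logical networks multiplexed over the same "pipe":
    link = [encapsulate(1, b"legacy TCP/IP traffic"),
            encapsulate(2, b"experimental clean-slate protocol")]

    for frame in link:
        net_id, inner = decapsulate(frame)
        print(f"virtual network {net_id}: {inner!r}")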

"Almost every assumption going into the current design of the 
Internet is open to reconsideration and challenge," said Parulkar, 
the NSF official heading to Stanford. "Researchers may come up with 
wild ideas and very innovative ideas that may not have a lot to do 
with the current Internet."

------

Associated Press Business Writer Aoife White in Brussels, Belgium, 
contributed to this report.

------

On the Net:

Stanford program: http://cleanslate.stanford.edu

Carnegie Mellon program: http://100x100network.org

Rutgers program: http://orbit-lab.org

NSF's GENI: http://geni.net