U.S. Department of Defense Creating a Public Nuisance?

By Jane Genova -- “We’re the government and we’re here to help.”  When government first says that, it probably is true.  Take the U.S. Department of Defense (DOD).

Decades ago, DOD became involved in the research and development of high-performance computing (HPC) as a must-do for national security.  Those HPC applications support aeronautics, cryptography, and nuclear weapons design and testing.  Then, in the mid-1980s, DOD leveraged that expertise to build U.S. economic competitiveness, stimulating productivity and innovation in the private sector, from manufacturing to energy [http://www.stormingmedia.us/15/15559/A155994.html].

That was then.  Today, HPC is a stand-alone industry.  It’s business as usual in the financial markets, biological sciences, geosciences, and engineering.  Last year, despite the downturn, the high end grew by 25% to $3.4 billion, according to WORLDWIDE HIGH-PERFORMANCE TECHNICAL SERVER.

Yet, in an irony tragically typical of government, DOD seems to be the source of problems in the HPC industry.  Those problems are of two types.  Together they could be impairing or harming the common good in such a way as to be considered, legally and in terms of public policy, a “public nuisance.”  Since 2005, I have been covering how the traditional theory of public nuisance is being applied to product liability, the environment, and energy.  Recently, in “North Carolina v. TVA,” the plaintiff used the concept of public nuisance and won.

One of the two problems is a lack of disclosure, which could be masking a conflict of interest.  That is the situation with InsideHPC.com, a DOD publication.  The online site is headed by full-time DOD employee John E. West.  On the “About Us” page, West identifies himself as employed “in a computing strategy role in R&D in the public sector.” Why isn’t DOD explicitly identified? A source informs me that West “works with Lockheed Martin as the systems integrator at the U.S. Army site.”  John Leidel, also involved with InsideHPC.com, “works for Convey Computer,” the same source tells me.

When I presented this information to Christopher M. O’Neal, Chief Executive Officer of SuperComputerOnline.com, he noted, “FTC Disclosure Policies are designed to allow online communicators to self-identify any affiliation that may influence the content of the blog and allow readers to make their own judgment regarding the influence on content.”  Variations of disclosure rules regarding online content are working their way through state and federal courts.  At stake in those rulings is the credibility of digital as a medium as well as its commercial future.

Some might shrug: “InsideHPC is only one of those dry government publications.”  But InsideHPC.com is hardly that.  Actually, it is slickly commercial.  In fact, it gushes on its website, “We aren’t just another ‘me too’ HPC news site, and we aren’t interested in letting ‘me too’ HPC companies reach our readers.”  Yet this no-me-too site is paid for with taxpayer money.  West’s trips to HPC conferences are funded by taxpayers.

Simultaneously, there are now private-sector HPC publications, ranging from SuperComputerOnline.com to HPCWire.com, which play in that same sandbox.  But to play, they must cover all their expenses, including trips to conferences, out of their own pockets.  Those costs come straight out of revenue.

That leads right into the second argument that DOD might be creating or contributing to a public nuisance by permitting InsideHPC.com to exist in its present form.  That second issue is: Why is this unfair competition with the private sector being permitted?

Anyone who understands perception knows this: the DOD publication, being issued by the government, carries a type of official imprimatur.  In media we call that the “halo effect.”  That could position it, among private-sector publications, as unique in its authority and credibility.  Clearly, that gives it an unfair advantage over other HPC publications, an advantage that should not exist.

That brings up the core issue: Why would any government agency set itself up as a competitor with business?  Doesn’t that bring up the very question of what is the mission and function of government?

When government intrudes in this way, at best it creates redundancy.  At worst it takes on private enterprise, and with built-in cost and influence advantages.  At the top of the list: not having to factor in many kinds of expenses that private enterprises must.

There’s more.  InsideHPC now has a dedicated marketing and sales arm, including Mike Bernhardt, to sell ads.  Right now, most advertising is a zero-sum game: what space InsideHPC sells, the private sector likely doesn’t.  Yet most publications depend on advertising for profits.

The Business Coalition for Fair Competition provides a white paper on what is so dangerous about both of these best- and worst-case scenarios [http://governmentcompetition.org/howgovtcompetes.html].  Isn’t this the other side of the coin of government agencies being too cozy with businesses? Unfortunately, this side too often remains invisible, due to lack of transparency.

U.S. government can be most helpful when it discerns that its mission has been accomplished and surrenders that function.  If it refuses to do just that, then it can be viewed, in legal and public policy terms, as creating a public nuisance.

Jane Genova blogged the Rhode Island lead paint public nuisance trial and its aftermath, beginning on syndicated site http://janegenova.com under “legal” and continuing on syndicated site http://lawandmore.typepad.com.  Then she expanded analysis of public nuisance to environmental and energy matters.  She has been interviewed on legal and policy issues by THE NEW YORK TIMES, CRAIN’S BUSINESS and PLAIN DEALER. Her posts are regularly linked to by THE WALL STREET JOURNAL, LEGAL TECHNOLOGY, PUBLIC NUISANCE, and NEW YORK Magazine.

Turbulence responsible for black holes' balancing act

New simulations reveal that turbulence created by jets of material ejected from the disks of the Universe’s largest black holes is responsible for halting star formation. Evan Scannapieco, an assistant professor in the School of Earth and Space Exploration in the College of Liberal Arts and Sciences at Arizona State University (ASU) and Professor Marcus Brueggen of Jacobs University in Bremen, Germany, present the new model in a paper in the journal Monthly Notices of the Royal Astronomical Society.
 
We live in a hierarchical Universe where small structures join into larger ones. Earth is a planet in our Solar System, the Solar System resides in the Milky Way Galaxy, and galaxies combine into groups and clusters. Clusters are the largest structures in the Universe, but sadly our knowledge of them is not proportional to their size. Researchers have long known that the gas in the centres of some galaxy clusters is rapidly cooling and condensing, but were puzzled why this condensed gas did not form into stars. Until recently, no model existed that successfully explained how this was possible.
 
Professor Scannapieco has spent much of his career studying the evolution of galaxies and clusters. “There are two types of clusters: cool-core clusters and non-cool core clusters,” he explains. “Non-cool core clusters haven’t been around long enough to cool, whereas cool-core clusters are rapidly cooling, although by our standards they are still very hot.”
 
X-ray telescopes have revolutionized our understanding of the activity occurring within cool-core clusters. Although these clusters can contain hundreds or even thousands of galaxies, they are mostly made up of a diffuse, but very hot gas known as the intracluster medium. This intergalactic gas is only visible to X-ray telescopes, which are able to map out its temperature and structure. These observations show that the diffuse gas is rapidly cooling into the centres of cool-core clusters.
 
At the core of each of these clusters is a black hole, billions of times more massive than the Sun. Some of the cooling medium makes its way down to a dense disk surrounding this black hole, some of it goes into the black hole itself, and some of it is shot outward. X-ray images clearly show jet-like bursts of ejected material, which occur in regular cycles.
 
But why were these outbursts so regular, and why did the cooling gas never drop to the colder temperatures that would lead to the formation of stars?  Some unknown mechanism was performing an impressive balancing act.
 
“It looked like the jets coming from black holes were somehow responsible for stopping the cooling,” says Scannapieco, “but until now no one was able to determine how exactly.”
 
Scannapieco and Brueggen used the enormous supercomputers at ASU to develop their own three-dimensional simulation of the galaxy cluster surrounding one of the Universe’s biggest black holes. By adapting an approach developed by Guy Dimonte at Los Alamos National Laboratory and Robert Tipton at Lawrence Livermore National Laboratory, Scannapieco and Brueggen added the component of turbulence to the simulations, which was never accounted for in the past.
 
And that was the key ingredient.
 
Turbulence works in partnership with the black hole to maintain the balance. Without the turbulence, the jets coming from around the black hole would grow stronger and stronger, and the gas would cool catastrophically into a swarm of new stars. When turbulence is accounted for, the black hole not only balances the cooling, but goes through regular cycles of activity.
 
“When you have turbulent flow, you have random motions on all scales,” explains Scannapieco. “Each jet of material ejected from the disk creates turbulence that mixes everything together.”
 
Scannapieco and Brueggen’s results reveal that turbulence acts to effectively mix the heated region with its surroundings so that the cool gas can’t make it down to the black hole, thus preventing star formation.
 
Every time some cool gas reaches the black hole, it is shot out in a jet. This generates turbulence that mixes the hot gas with the cold gas. This mixture becomes so hot that it doesn’t accrete onto the black hole. The jet stops and there is nothing to drive the turbulence so it fades away. At that point, the hot gas no longer mixes with the cold gas, so the centre of the cluster cools, and more gas makes its way down to the black hole.
 
Before long, another jet forms and the gas is once again mixed together.
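The cycle described above can be captured in a toy model.  To be clear, this is not the Scannapieco-Brueggen simulation; it is a dimensionless sketch, with made-up rate constants, of how a cool-trigger jet plus decaying turbulence naturally produces regularly spaced outbursts.

```python
# Toy model of the jet/turbulence feedback cycle described above.
# All quantities and constants are dimensionless and purely illustrative.

COOL_RATE = 0.05      # steady radiative cooling per step
HEAT_PER_JET = 1.0    # temperature boost from a jet-driven mixing event
DECAY = 0.9           # turbulence fades each step once the jet stops
JET_THRESHOLD = 0.2   # gas this cool reaches the black hole

def run(steps=200):
    temp, turb = 1.0, 0.0
    outbursts = []
    for t in range(steps):
        # cooling is suppressed while turbulence mixes hot gas inward
        temp -= COOL_RATE * (1.0 - min(turb, 1.0))
        if temp < JET_THRESHOLD:      # cool gas reaches the black hole...
            outbursts.append(t)       # ...and is shot out in a jet
            turb = 1.0                # the jet stirs up turbulence
            temp += HEAT_PER_JET      # mixing reheats the cluster centre
        turb *= DECAY                 # turbulence decays between jets
    return outbursts

if __name__ == "__main__":
    times = run()
    gaps = [b - a for a, b in zip(times, times[1:])]
    print("outburst times:", times)
    print("intervals between outbursts:", gaps)
```

Running it shows the intervals between outbursts settling into a near-constant value, the same qualitative behaviour as the observed cycles, without any external clock driving the system.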
 
“We improved our simulations so that they could capture those tiny turbulent motions,” explains Scannapieco. “Even though we can’t see them, we can estimate what they would do. The time it takes for the turbulence to decay away is exactly the same amount of time observed between the outbursts.”
Each rack holds 52 Angstrom Microsystem-brand “blades,” with a memory footprint of 12 or 24 gigabytes each. (Photos by Olivia Bartlett Drake)

Blue Sky Studios Donates Animation SuperComputer to Wesleyan

Next fall, Wesleyan students and faculty will perform research on the same state-of-the-art animation computers that produced Ice Age: The Meltdown, a $652 million worldwide box office hit.

The computer hardware was donated July 2 by Greenwich, Conn.-based Blue Sky Studios, the creator of a number of award-winning digital animation features, including the Ice Age series and Dr. Seuss’ Horton Hears a Who, which took in nearly $300 million worldwide.

In 2008, Blue Sky Studios refreshed their technology for their latest movie, Ice Age: Dawn of the Dinosaurs, and bought racks of new computers.

“The old computer racks still had a lot of life left in them, so we went looking for large colleges and universities in Connecticut that might be able to make use of this kind of computing infrastructure, and to which we might donate these computers,” explains Andrew Siegel, head of systems at Blue Sky Studios. “Wesleyan seemed like a good candidate.”

Blue Sky arranged for the racks to be delivered to the Exley Science Center loading dock. They are now housed on the fifth floor of Information Technology Services.

“We requested two, but they graciously gave us four,” says Ganesan “Ravi” Ravishanker, associate vice president for Information Technology Services.

Each rack holds 52 Angstrom Microsystem-brand “blades,” with a memory footprint of 12 or 24 gigabytes each. Combined, Blue Sky donated about 3.7 terabytes of total memory.
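The quoted figures can be checked with back-of-the-envelope arithmetic.  The article does not say how the 12 GB and 24 GB blades are split, so an even mix is assumed here purely for illustration.

```python
# Sanity check of the donation figures quoted above.
# The 50/50 split between 12 GB and 24 GB blades is an assumption.
RACKS = 4
BLADES_PER_RACK = 52

blades = RACKS * BLADES_PER_RACK              # total blades donated
avg_gb_per_blade = (12 + 24) / 2              # assumed even mix -> 18 GB
total_tb = blades * avg_gb_per_blade / 1000   # gigabytes -> terabytes

print(f"{blades} blades, roughly {total_tb:.1f} TB of memory")
```

With those assumptions the total lands at about 3.7 TB, consistent with the figure in the article.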

“This is just phenomenal,” says Henk Meij, senior consultant and manager of Unix Systems Group. “Once it’s in full operation, it’s going to be much appreciated by the researchers. They’re definitely going to notice a difference in how fast research can be done.” 

One rack will be devoted to supporting high performance computing at Wesleyan. The current system allows up to 300 “jobs” to run at once. An additional 100 jobs will be able to run with the new rack, and at higher processing speeds.

“If a graduate student in astronomy wants to calculate planet rotations in a section of the galaxy, he or she will be able to do this much faster,” Meij says.

Another rack will be used primarily by ITS in a pilot project to build a virtualized data center from an entire rack.  Services such as blogs, wikis, web servers and similar applications could be hosted in such an environment.  When a hardware failure occurs, or one server experiences heavy loads, the virtualization layer would automatically migrate the services to healthy servers.

The additional two racks will be used to replace any hardware that fails in the production systems.  Wesleyan would need additional cooling systems to run all four racks at once.

The high-speed animation computers feature 104 central processing units (CPUs) per rack.  Each rack has a current market value of approximately $35,000.  The University of Connecticut’s drama and computer science and engineering departments are also each receiving two racks.

Blue Sky, a wholly owned subsidiary of Fox Filmed Entertainment, relocated to Connecticut from New York in January, bringing with it more than 300 jobs. The company, which continues to expand, said it was attracted to Connecticut because of the state’s efforts to promote the film industry.

“This is a tremendous gift for our students and for our state,” Governor Jodi Rell said in a statement. “The film industry has clearly found a home in Connecticut and we are grateful for Blue Sky’s commitment to Connecticut and partnership in helping us develop the next generation of skilled, educated industry professionals. This generous donation comes at a time when resources for so many worthwhile programs are stretched thin.”

Complex Concepts That Really Add Up

By Leyla Ezdinli -- An annual outreach program run by USC’s Collaboratory for Advanced Computing and Simulations is helping shape the future of computational science by encouraging members of underrepresented groups to pursue graduate work and research in scientific computing.

For the past eight years, the Computational Science Workshop for Underrepresented Groups has offered participants an opportunity to learn about complex research concepts in a hands-on and interdisciplinary environment.

(Photo: USC graduate student Amy Yuan and Roderick Brown, a participant in the Computational Science Workshop for Underrepresented Groups.)

The majority of participants are students and faculty members from small historically black colleges and universities with limited resources for research computing and curriculum development. For students, the workshop can have a profound influence on their choice of majors and careers. For faculty, the workshop offers the resources necessary to develop new courses and advance their research.

“The goal of the workshop is, in one week, to break the participants’ fear of computing and their ideas of parallel computing,” said Priya Vashishta, professor of materials science at the USC Viterbi School of Engineering, professor of physics at USC College and director of the Collaboratory for Advanced Computing and Simulations.

“We do not ask that people know about computing before they arrive — all we ask is that they have a good head on their shoulders,” he said.

Teaching parallel computing in a way that is comprehensible to those without a solid foundation in computer science and advanced mathematics is no small feat.

Parallel computing is a sophisticated form of computation in which a complex problem is divided into smaller problems that are then distributed to a cluster of networked computers for simultaneous processing. Parallel computing allows researchers to solve problems involving extremely large data sets much faster than would serial computation, in which operations are performed in a linear manner.
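The divide-distribute-combine idea can be shown in miniature.  The sketch below uses Python’s standard multiprocessing module in place of a networked cluster: a large sum is split into chunks, worker processes compute the partial sums simultaneously, and the results are combined.  The problem and chunking scheme are invented for illustration.

```python
# Minimal illustration of the parallel-computing idea described above:
# split a big problem into chunks, process them simultaneously, combine.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over one chunk of the range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # divide the range [0, n) into one chunk per worker;
    # the last chunk absorbs any remainder
    step = n // workers
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))  # combine partial results

if __name__ == "__main__":
    n = 1_000_000
    print("parallel result:", parallel_sum_of_squares(n))
    print("serial result:  ", sum(i * i for i in range(n)))
```

On a real cluster the same pattern holds, except the chunks are shipped over the network to separate machines rather than to processes on one box.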

The workshop takes a novel and ambitious approach to teaching parallel computing. On the first day, each student assembles his or her own computer from components and installs the Linux operating system. All the computers in the workshop are then networked together to form a cluster.

Over the course of the week, students learn to write and compile code, write parallel codes, run programs on the cluster and analyze the cluster’s performance metrics.

The workshop was designed and developed by Rajiv Kalia, Aiichiro Nakano and Vashishta, the founders of the collaboratory, all of whom hold joint appointments in USC’s departments of physics and astronomy, chemical engineering and materials science, and computer science.

“This is an extremely intense workshop,” Nakano said. “It was Priya’s vision to have students build a supercomputer cluster from personal computers. He trained all of us,” he added, referring to the collaboratory faculty and graduate students who organize and teach the workshop each year.


 

TeraGrid ’09 'Call for Participation'

TeraGrid'09

June 22-26, 2009

Hyatt Regency Crystal City

Arlington, Virginia

 

The TeraGrid ’09 conference will showcase the capabilities and impact of the TeraGrid in research and education. All interested individuals are invited to participate. Submissions are sought for the science and technology presentation tracks, poster session, visualization showcase, and tutorials. 

 

SCIENCE TRACK

Submissions should demonstrate the impact of the TeraGrid through scientific results or the emergence of new communities. Submissions should: articulate the scientific problem; describe the scientific and computational methods and TeraGrid resources used; and present results, impact of the TeraGrid, and future plans. Work previously published in another venue or presented at another conference may be submitted for consideration. Accepted submissions will be included in the conference as 30-minute presentations.

 

For full submission details see: http://www.teragrid.org/tg09/participation

Questions? Contact Science Track Co-Chairs Shawn T. Brown (stbrown@psc.edu) or Jay Alameda (jalameda@ncsa.uiuc.edu). 

 

Science Track Dates

Submission site opens: Feb. 11

Science track abstracts due: March 20

Notification of acceptance: April 24

Final abstracts due for online publication: May 22

 

TECHNOLOGY TRACK

Submissions should present technology developments and capabilities that enable increased performance, productivity, and/or reliability of TeraGrid users, applications, and resources. Submissions should describe the technology in detail, discuss achieved or potential impact, and articulate future plans. Submissions must describe new, previously unpublished work. Accepted submissions will be included in the conference as 30-minute presentations.

 

For full submission details see: http://www.teragrid.org/tg09/participation/

Questions? Contact Technology Track Co-Chairs Chris Jordan (ctjordan@tacc.utexas.edu) or Tom Scavo (tscavo@ncsa.uiuc.edu). 

 

Technology Track Dates

Submission site opens: Feb. 11

Tech track papers due: March 20

Notification of acceptance: April 24

Final papers due for online publication: May 22

 

POSTERS

Posters should present new results or promising work in progress involving the use of the TeraGrid for scientific research and/or the development of new technologies for scientific computing. Accepted submissions will be included in the poster session, where at least one contributor to the project is expected to be present. 

 

For full submission details see: http://www.teragrid.org/tg09/participation/

Questions? Contact Posters Co-Chairs Daniel S. Katz (d.katz@ieee.org) or Shantenu Jha (sjha@cct.lsu.edu). 

 

Poster Dates

Submission site opens: Feb. 11

Poster abstracts due: May 1

Notification of acceptance: May 15

Final poster abstracts due for online publication: May 22

 

VISUALIZATION SHOWCASE

The Visualization Showcase provides a digital gallery of the powerful, evocative imagery associated with the TeraGrid's most exciting and compelling results. Submissions should have used TeraGrid resources to generate data, to produce the visualization, or both, and should be the result of work accomplished within the past year.  Accepted submissions will be displayed in the Visualization Showcase during the conference. 

 

For full submission details see: http://www.teragrid.org/tg09/participation/

Questions? Contact Visualization Showcase Co-Chairs Joseph Insley (insley@mcs.anl.gov) or Kelly Gaither (kelly@tacc.utexas.edu).

 

Visualization Showcase Dates

Submission site opens: Feb. 11

Visualization showcase abstracts due: April 24

Notification of acceptance: May 1

Final visualization abstracts due for online publication: May 22

 

TUTORIALS

Tutorials will provide in-depth training to effectively use TeraGrid resources and services. Tutorial proposals should specify: topic/title of the tutorial; proposed agenda; names and affiliations of all instructors; software requirements; any prerequisites; whether the tutorial is a half or full day; and whether the material is introductory, intermediate, or advanced. Preference will be given to hands-on activities.

 

For full submission details see: http://www.teragrid.org/tg09/participation/

Questions? Contact Tutorials Co-Chairs Scott Lathrop (scott@ncsa.uiuc.edu) or Sandie Kappes (skappes@ncsa.uiuc.edu).

 

Tutorial Dates

Submission site opens: Feb. 11

Tutorial proposals due: March 20

Notification of acceptance: April 24

Final tutorial materials due: May 22