cloud computing, Cyberdefense and Information Assurance, tech

Amazon’s EC2 outage may be related to cyber-attack

At the moment, I’m waiting for some sort of confirmation. But this is what I know:

Since Monday, Change.org, a site that hosts petitions and other social action campaigns on behalf of other organizations, has been the target of a DDoS attack originating in China, according to Ben Rattray, Change.org's founder. The organization has been working with its hosting company and with security experts to screen out the attack as much as possible, but the site was down for much of yesterday, and it has been down intermittently today.

Interesting fact: Change.org is hosted on Amazon Web Services.

Interesting fact: AWS’ Elastic Compute Cloud data center in Northern Virginia is experiencing an outage of various services, affecting Quora, HootSuite, and other social media companies hosted on it.  That is the same facility where Change.org is primarily hosted, since the Northern Virginia data center serves as AWS’ US East region.

The attackers have been varying their approach.  Is it possible they are now exploiting Amazon’s EC2 APIs as part of the attack?

I haven’t heard back from Amazon.

cloud computing, Enterprise IT, NASA, sticky, tech

Chris Kemp Quits, as Fed Budget and Inertia Beat Govtrepreneurs Down

Chris Kemp, who only a few short weeks ago was greeted with rockstar fervor at the Cloud/Gov conference in Washington, DC, has stepped down from his role as NASA’s Chief Technology Officer for Information Technology.  Kemp was the champion of NASA’s Nebula program, the agency’s private cloud effort, and helped with the General Services Administration’s launch of the Apps.gov cloud service program. But in the face of budget cuts and continued institutional resistance to his agenda for changing government IT, Kemp submitted his resignation in March.

“Whereas I thought I had the best of both worlds being a Headquarters employee stationed in Silicon Valley,” Kemp said in a blog post announcing his move, “I actually had the worst of both worlds… no influence when I can’t be in all of those meetings at NASA HQ, with no mandate to manage projects at Ames. As budgets kept getting cut and continuing resolutions from Congress continued to make funding unavailable, I saw my vision for the future slowly slip further from my grasp.”

Kemp’s dilemma, while certainly higher profile than that of many state and local CIOs and CTOs, is hardly unique.  With tax revenues at historic lows and budgets tight, it’s perhaps harder than ever to achieve meaningful change in the way agencies run their information technology, even at tech-focused agencies like NASA.  At the federal level, the budget standoff threatens to put on hold major initiatives that could actually save the government money.

But perhaps more dangerous, the uncertainties around IT budgets and programs at all levels of government can be demoralizing, particularly to the most talented and valuable members of IT organizations who have options elsewhere.  As other employment opportunities emerge, government IT organizations could see an exodus of talent, making it even more difficult to do more with less.

 

cloud computing, Cyberdefense and Information Assurance, sticky

State, Local Agencies Should Examine NIST’s Public Cloud Guidelines

(This post was originally published on the Virtual Integrated System Blog)

As I mentioned in a recent post, the National Institute of Standards and Technology recently published a document outlining the risks of cloud computing and offering policies and procedures to help reduce those risks. While the guidelines aren’t official federal policy yet, they are a good starting point for agencies at any level of government thinking about using public clouds as a part of their cost-cutting and consolidation of IT services.

The core guidelines of the NIST document come down to four main steps in preparing for a public cloud solution:

  1. “Carefully plan the security and privacy aspects of cloud computing solutions before engaging them.” Before even looking at cloud solutions, an organization should fully understand the privacy and security requirements of the data that will be handled. Not doing due diligence on all of the potential privacy and security issues in advance can lead to roadblocks later–or worse, major breaches in security and exposure of citizens’ private data. The City of Los Angeles was caught by surprise when it found its cloud solution wasn’t in alignment with federal data protection regulations for public safety data, for example.
  2. “Understand the public cloud computing environment offered by the cloud provider and ensure that a cloud computing solution satisfies organizational security and privacy requirements.” Most public cloud services–be they infrastructure-as-a-service, platform-as-a-service, or software-as-a-service–were not built with public sector regulatory requirements in mind. Agencies need to do an analysis of the gaps between what cloud providers offer and what their own privacy and security demands require–and then determine whether the cost of getting that sort of solution from a cloud provider makes going forward with a project financially feasible.
  3. “Ensure that the client-side computing environment meets organizational security and privacy requirements for cloud computing.” Just because the application and data are secure at the back end in the provider’s cloud doesn’t ensure the overall security of the solution. It’s easy to overlook the client side, which can create a number of potential security problems, especially if SaaS applications include support for mobile devices. It’s important to consider how to lock down smartphones and other mobile devices so that, if they’re lost or stolen, they can’t reach internal resources through cached credentials. And there’s also the question of how the public cloud service will integrate with the identity management and authentication standards already in use in the organization.
  4. “Maintain accountability over the privacy and security of data and applications implemented and deployed in public cloud computing environments.” Outsourcing the infrastructure doesn’t mean an organization is outsourcing responsibility. Public clouds should be handled like any other managed service or outsourcing arrangement–agencies need to ensure that security and privacy practices are applied consistently and appropriately in the cloud just as they are to internal IT resources. That means agencies should have visibility into the operation of the cloud service, including the ability to monitor the security of the cloud assets and continually assess how well security and privacy standards and practices are implemented within the cloud infrastructure.
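To make the fourth step more concrete, here is a minimal sketch of the kind of automated visibility check an agency might run against a public cloud account. AWS and the boto3 Python SDK are assumptions made purely for illustration (the NIST guidelines don’t prescribe a provider or a tool); the sketch simply flags storage volumes that aren’t encrypted and security groups that are open to the entire Internet.

```python
# Minimal sketch: audit cloud assets against two simple security policies
# (volumes encrypted at rest, no security groups open to the whole Internet).
# AWS and boto3 are illustrative assumptions; adapt to your own provider.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Flag EBS volumes that are not encrypted at rest.
for volume in ec2.describe_volumes()["Volumes"]:
    if not volume.get("Encrypted", False):
        print(f"Unencrypted volume: {volume['VolumeId']}")

# Flag security groups that allow inbound traffic from anywhere.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            print(f"World-open security group: {group['GroupId']}")
```

A check like this, run on a schedule and fed into the agency’s existing monitoring, is one small way to keep the accountability the guidelines call for after the infrastructure has been outsourced.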

 

At the end of the day, after assessing how well public cloud providers can handle the requirements of government applications, agencies may find that much of what they thought could be moved to a public cloud environment is better suited to a private cloud service.

cloud computing, Cyberdefense and Information Assurance

What an Internet “Kill Switch” Would Mean to the Public Cloud

In the wake of the events in Egypt in early February–and the cut-off of Internet access by the Egyptian government in response to protests coordinated partially by social media–the U.S. Senate took up legislation that would give the President the ability to exert emergency powers over Internet traffic in the event of cyber attack or some other sort of nationwide cyber threat.

While senators deny that any legislation will include a “kill switch” measure (one allowing the President to shut down the public Internet in case of an emergency), just the discussion of such a capability has sent waves of concern through the Internet community, and it has raised major questions about what impact such legislation could have on public cloud providers.

David Linthicum, CTO and founder of Blue Mountain Labs, recently wrote an article about how just the idea of a “kill switch” is already hurting cloud providers. The reason: organizations are reluctant to invest in cloud computing as a solution, because they are concerned about the possibility of their connection to data being “pulled from (them) at any time.”

But it doesn’t take an Internet “kill switch” to make that happen. A denial-of-service attack or other network degradation, whether from overt hostile acts, natural disaster, or any number of other events that affect public Internet bandwidth, could disconnect organizations from the public cloud without warning if there aren’t proper provisions for alternate connections.
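As a rough illustration of what “provisions for alternate connections” can mean in practice, here is a minimal sketch that probes a cloud endpoint and falls back to a secondary path. The host names and the idea of a backup route are illustrative assumptions, not anything from the proposed legislation or Linthicum’s article.

```python
# Minimal sketch: verify that a cloud endpoint is reachable, and report
# when it is time to fail over to an alternate connection or local copy.
# The endpoint and backup host below are illustrative placeholders.
import socket

def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

PRIMARY = "api.example-cloud.com"      # hypothetical cloud service endpoint
BACKUP = "mirror.example-agency.gov"   # hypothetical alternate path or local mirror

if reachable(PRIMARY):
    print("Primary cloud connection is up.")
elif reachable(BACKUP):
    print("Primary is down; failing over to the alternate connection.")
else:
    print("No path to the cloud; fall back to locally cached data.")
```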

Read the rest of this post at: Virtual Integrated System Blog – Government – What an Internet “Kill Switch” Would Mean to the Public Cloud.

cloud computing, sticky

McNealy’s Monday Morning Quarterbacking on Solaris and Linux … shows he still doesn’t get it.

Scott “Privacy Is Dead” McNealy told an audience at an event in Silicon Valley that Sun could have won out over Linux if the company had consistently pushed forward Solaris x86 instead of pussyfooting around.  “Google today would be running on Solaris,” he said.

Um, no.

Solaris was, and is, a great operating system, to be sure. But Linux did not succeed because of Sun’s failure to commit to Intel.  Linux succeeded because of the open-source model, and the ability of IT people all over the world to try it without license restrictions.

If Sun had open-sourced Solaris early, it might well have taken a dent out of Linux’s success. But that’s a big if.  Considering how much internal wrangling, legal finagling, and patent-exchanging it took to open-source Solaris in the timeframe that it did, and that the result came with the somewhat restrictive terms of Sun’s custom-rolled open-source license, a license that split Solaris off to some degree from other open-source communities, it’s doubtful that McNealy could have pulled it off any sooner. It wasn’t until 2005 that Sun cleared the legal hurdles to open-source Solaris.

There are so many other “woulda, shoulda, coulda” moments in Sun’s history. McNealy should be acknowledged for his early recognition of the coming of cloud computing, which he called “application dial-tone.”  But Sun had multiple opportunities to redefine the market with open source early, both with Java and Solaris.  The company’s toe-dips with its investments in OpenOffice (via its acquisition of StarOffice), GNOME, MySQL, and other open-source projects came after Linux had already become a major threat. And honestly, Sun did those things to put a thumb in Microsoft’s eye.

So, McNealy can look back and replay the game all he wants. But it won’t change the fact that Sun was caught up in SPARC, and failed to leverage Solaris and Java to transition the company toward being an open-source-driven software services company that also sells hardware.  And that’s why Larry Ellison owns Sun now.

cloud computing, NASA, sticky, tech, virtualization

NASA’s Chris Kemp calls OpenStack the “Linux of Cloud”, and predicts a public cloud future.

Chris Kemp, NASA’s CTO for IT, closed out yesterday’s Cloud/Gov conference in DC with a discussion of Nebula, NASA’s open-source cloud-in-a-shipping-container, and the impact it has had on the agency. Kemp drew the most enthusiastic response of any of the speakers, including whoops from some of the government employees and vendors in the audience, and for good reason: Nebula has become the gravitational center of cloud standards efforts within and outside the government.

“While (the National Institute of Standards and Technology) is talking about standards, there are de facto standards that are evolving right now,” Kemp said. And Nebula, he said, “is a reference implementation of what NIST is doing.”

The Nebula project’s code has become the core of the OpenStack initiative, the open-source cloud infrastructure software project, and now is maintained by a community that includes Intel, AMD, Dell, Rackspace, and an army of other technology companies. “There are over 1000 developers that have submitted features and bug fixes,” Kemp said, “and over 100 companies.  If you’re interested in doing a cloud, you can download OpenStack today.  It’s the Linux of the cloud–it gives you an environment you can actually develop on and meet a requirement, and build your environment on, on a platform that’s compatible with everything in the industry.”
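For readers curious what “download OpenStack today” looks like in practice, here is a minimal sketch of booting an instance against an OpenStack cloud using the openstacksdk Python library. The cloud name, image, flavor, and network names are illustrative assumptions, not anything Kemp or NASA specified.

```python
# Minimal sketch: boot a server on an OpenStack cloud with openstacksdk.
# Assumes credentials for a cloud named "mycloud" exist in clouds.yaml,
# and that the image, flavor, and network named below exist in that cloud.
import openstack

conn = openstack.connect(cloud="mycloud")        # hypothetical cloud entry

image = conn.compute.find_image("ubuntu-20.04")  # illustrative image name
flavor = conn.compute.find_flavor("m1.small")    # illustrative flavor name
network = conn.network.find_network("private")   # illustrative network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance reaches ACTIVE state, then print its details.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The point of Kemp’s “Linux of the cloud” comparison is that the same open API works against any OpenStack deployment, whether it is a Nebula-style private cloud or a commercial public provider.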

Kemp said he believed the public cloud could be as secure as private clouds, but that private clouds were a “necessary stepping stone” toward the day when NASA doesn’t have to be in the IT business, because they demonstrate that cloud environments can be made completely secure.  And by moving to a private cloud, he said, agencies do the majority of the work required to get to the point where they can move to a public cloud infrastructure.

“Once you virtualize an application, you’re more than halfway there,” Kemp said.  “Every agency that builds a private cloud takes us 90% of the way to where we’ll be able to put everything in the public cloud.”

Still, Kemp said, it will be decades before agencies are able to make that jump completely. “We’ve only scratched the surface of this.  We still have mainframe systems running that were coded in the ’70s. They’re systems we just haven’t taken the time to make run in Oracle or SQL Server.  Moving something to cloud is a thousand times bigger a challenge.”  The only apps that have been written to take advantage of the features of the cloud so far are apps that were written for the cloud to begin with, such as Google’s apps and Zynga’s game platforms.

Kemp emphasized that cloud infrastructure and data center consolidation were not synonymous.  “One thing that I hope happens is that you treat data center consolidation and cloud as separate things. If you’re virtualizing existing applications, you need the support of commercial systems. But if you’re doing really pioneering development, and can’t use Amazon, then you need something like (Nebula).”
