When Libyan rebels finally wrested control of the country from its mercurial dictator last year, they discovered the Qaddafi regime had received an unusual gift from its allies: foreign firms had supplied technology that allowed security forces to track nearly all of the online activities of the country’s 100,000 Internet users. That technology, supplied by a subsidiary of the French IT firm Bull, used a technique called deep packet inspection (DPI) to capture e-mails, chat messages, and Web visits of Libyan citizens.
The fact that the Qaddafi regime was using deep packet inspection technology wasn’t surprising. Many governments have invested heavily in packet inspection and related technologies, which allow them to build a picture of what passes through their networks and what comes in from beyond their borders. The tools secure networks from attack—and help keep tabs on citizens.
Narus, a subsidiary of Boeing, supplies “cyber analytics” to a customer base largely made up of government agencies and network carriers. Neil Harrington, the company’s director of product management for cyber analytics, said that his company’s “enterprise” customers—agencies of the US government and large telecommunications companies—are “more interested in what’s going on inside their networks” for security reasons. But some of Narus’ other customers, like Middle Eastern governments that own their nations’ connections to the global Internet or control the companies that provide them, “are more interested in what people are doing on Facebook and Twitter.”
Surveillance perfected? Not quite, because DPI imposes its own costs. While deep packet inspection systems can be set to watch for specific patterns or triggers within network traffic, each specific condition they watch for requires more computing power—and generates far more data. So much data can be collected that the DPI systems may not be able to process it all in real time, and pulling off mass surveillance has often required nation-state budgets.
Not anymore. Thanks in part to tech developed to power giant Web search engines like Google’s—analytics and storage systems that generally get stuck with the label “big data”—“big surveillance” is now within reach even of organizations like the Olympics.
Network security camera
The tech is already helping organizations fight the ever-rising threat of hacker attacks and malware. The organizers of the London Olympic games, in an effort to prevent hackers and terrorists from using the games’ information technology for their own ends, undertook one of the most sweeping cyber-surveillance efforts ever conducted privately. In addition to the thousands of surveillance cameras that cover London, there was a massive computer security effort in the Games’ Security Operation Centers, with systems monitoring everything from network infrastructure down to point-of-sale systems and electronic door locks.
The logs from those systems generated petabytes of data before the torch was extinguished. They were processed in real time by a security information and event management (SIEM) system using “big data” analytics to look for patterns that might indicate a threat and to trigger alarms swiftly when one was found.
The combination of the sophisticated analytics and massive data storage in big data systems with DPI network security technology has created what Dr. Elan Amir, CEO of Bivio Networks, calls “a security camera for your network.”
“There’s no question that within the next three to five years, not having a copy of your network data will be as strange as not having a firewall,” Amir told me.
“The danger here,” Electronic Frontier Foundation Technology Projects Director Peter Eckersley told Ars, “is that these technologies, which were initially developed for the purpose of finding malware, will end up being repurposed as commercial surveillance technology. You start out checking for malware, but you end up tracking people.”
Unchecked, Eckersley said, companies or rogue employees of those companies will do just that. And they could retain data indefinitely, creating a whole new level of privacy risk.
How deep packet inspection works
As we send e-mails, search the Web, and post messages and comments to blogs, we leave a digital trail. At each point where Internet communications are received and routed toward their ultimate destination, and at each server they touch, security and systems operations tools give every transactional conversation anything from a passing frisk to the equivalent of a full strip search. It all depends on the tools used and how they’re set up.
One of the key technologies that drives these tools is deep packet inspection. A capability rather than a tool itself, DPI is built into firewalls and other network devices. Deep packet inspection and packet capture technologies revolutionized network surveillance over the last decade by making it possible to grab information from network traffic in real time. DPI makes it possible for companies to put tight limits on what their employees (and, in some cases, customers) can do from within their networks. The technology can also log network traffic that matches rules set up on network security hardware— rules based on the network addresses that the traffic is going to, the type of traffic itself, or even keywords and patterns within its contents.
“Almost everything interesting happening in networking, especially with a slant toward cyber security, has some DPI embedded in it, even if people aren’t calling it that,” said Bivio’s Amir. “It’s a technology and a discipline that captures all of the processing and network activity that’s getting done on network traffic outside of the standard networking elements of packets—the addressing and routing fields. What gets people riled up a bit is the ‘inspection’ part, because somehow inspection has negative connotations.”
To understand how DPI works, you first have to understand how data travels across networks and the Internet. Regardless of whether they’re wired or wireless, Internet-connected networks generally use Internet Protocol (IP) to handle routing data between the computers and devices attached to them. IP sends data in chunks called packets—blocks of data preceded by handling and addressing information that lets routers and other devices on the network know where the data came from and where it’s going. That addressing information is often referred to in the networking world as Layer 3 data, a reference to its definition within the Open Systems Interconnection network model.
The OSI Layers of an Internet data packet
|Layer 1||Physical||The format for the transmission of data across the networking medium, defining how data gets passed across it. WiFi (802.11) is a physical layer standard.|
|Layer 2||Data link||Within a network segment, handles the physical addressing—the media access control (MAC) addressing of devices on the network and their communication. Ethernet and Point-to-Point Protocol are data link protocols.|
|Layer 3||Network||Handles the logical addressing and routing of data, based on soft-defined addresses. Internet Protocol headers are the Layer 3 data in a packet.|
|Layer 4||Transport||Protocol information, such as in the Transmission Control Protocol (TCP) and the User Datagram Protocol, provides for error-checking and recovery and flow control of data.|
|Layer 5||Session||Handles communications between applications, such as remote procedure calls, inter-process communications like “named pipes,” and TCP secure sockets (SOCKS).|
|Layer 6||Presentation or Syntax||Data formatting, serialization, compression and encryption services, like the Multipurpose Internet Mail Extension (MIME) format.|
|Layer 7||Application||The data sent for specific applications in formats such as HTTP for the request and delivery of Web content, File Transfer Protocol (FTP), IMAP and SMTP mail connections, and other application-specific formats.|
Internet routers generally look only at Layer 3 data to determine which path a packet gets relayed down. Network firewalls look a little deeper into the data when making a decision about whether to let packets pass onto the networks they protect. Packet-filtering firewalls typically look at Layers 3 and 4, checking which transport protocol (such as TCP or UDP) and which Internet Protocol port number packets use (ports are commonly associated with specific applications; port 80, for example, is usually associated with Web services).
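The Layer 3/4 checks described above amount to reading a few fixed-offset header fields. The following Python sketch builds a hand-made IPv4-plus-TCP header and reads back the addresses, protocol number, and ports a packet filter would act on; the addresses and ports are invented for illustration:

```python
import struct

# A minimal IPv4 header plus the first 4 bytes of a TCP header,
# built by hand purely for illustration.
ip_header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45,            # version 4, header length 5 (x4 = 20 bytes)
    0,               # type of service
    40,              # total length
    0x1234,          # identification
    0,               # flags / fragment offset
    64,              # time to live
    6,               # protocol: 6 = TCP
    0,               # checksum (left as 0 here)
    bytes([192, 168, 1, 10]),   # source address
    bytes([93, 184, 216, 34]),  # destination address
)
tcp_ports = struct.pack("!HH", 54321, 80)  # source port, destination port
packet = ip_header + tcp_ports

# The Layer 3/4 inspection a packet-filtering firewall performs:
version_ihl, _, _, _, _, _, proto, _, src, dst = struct.unpack(
    "!BBHHHBBH4s4s", packet[:20]
)
ihl = (version_ihl & 0x0F) * 4            # IP header length in bytes
src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])

print(".".join(map(str, src)), "->", ".".join(map(str, dst)))
print("protocol:", proto, "dst port:", dst_port)  # 6 = TCP, 80 = Web traffic
```

A filter rule then reduces to a comparison on these fields, such as `proto == 6 and dst_port == 80` to match outbound Web requests.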
Application-layer firewalls, which emerged in the 1990s, look still deeper into network traffic. These set rules for network traffic based on the specific type of application the data within the packet was for. Application firewalls were the first real “deep packet inspection” devices, checking the application protocols within the packets themselves, as well as searching for patterns or keywords in the data they contain.
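That deeper, content-aware inspection can be sketched with a couple of regular expressions standing in for DPI rules; the rule names and patterns below are invented for illustration, not taken from any real product:

```python
import re

# Hypothetical DPI rules: patterns an application-layer firewall
# might search for inside reassembled packet payloads.
RULES = {
    "http-request": re.compile(rb"^(GET|POST) (\S+) HTTP/1\.[01]"),
    "credit-card":  re.compile(rb"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def inspect(payload: bytes):
    """Return the names of all rules that match this payload."""
    return [name for name, pattern in RULES.items() if pattern.search(payload)]

payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(inspect(payload))                        # ['http-request']
print(inspect(b"card: 4111-1111-1111-1111"))   # ['credit-card']
```

Real DPI engines do this at line rate in hardware or optimized software, and against reassembled streams rather than single payloads, but the rule-matching principle is the same.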
Traffic cops vs. traffic spies
Where DPI devices sit in the network flow varies based on their purpose. DPI-based “stateful” firewalls briefly delay, or buffer, packets to check the traffic stream as it passes through. Other systems designed for deeper analysis of network content tend to passively collect packet data as it streams through a network chokepoint, then send instructions to the firewall and other security appliances when they find something amiss.
The advantage of in-line DPI systems is that holding packets in a buffer lets them act on the packets before sending them on their way: intercepting content and repackaging it, “forging” packets with new data, or stripping data out of packet streams, altering data in flight. Spam-blocking firewalls, for example, use DPI to identify inbound e-mail message streams and check their headers and content for known spammers, viruses, phishing attacks, and other potentially harmful content. The firewall then reroutes those messages to quarantine or removes attachments entirely.
Web-filtering firewalls check outbound and inbound Web traffic for visits to sites that violate certain policies, or watch for Web-based malware attacks. Bivio’s Network Content Control System, for example, uses in-line DPI to let network customers set “parental controls” on their Internet traffic—evaluating the domains of websites as well as the content itself for adult or objectionable material within social networking sites and blogs. The system also blocks “pharming” attacks that use malicious DNS servers to hijack Web requests and redirect them to another server (such as those mounted by the DNSChanger botnet).
Others go further, using their role at the edge of an enterprise network as a proxy for network clients to decrypt Secure Socket Layer (SSL) content in Web sessions, essentially executing a “man in the middle” attack on their users. Barracuda Networks, for example, recently introduced a version of its firewall firmware that adds social network monitoring features capable of decrypting SSL traffic to Facebook and other social networking services, then checking the content of that traffic for policy violations (including playing Facebook games during work hours).
Companies want these capabilities for a variety of reasons that fall loosely under “security”—including compliance with “e-discovery” requirements and preventing confidential data loss. But those capabilities can also be used for more wide-ranging monitoring of network users. For example, 13 of Blue Coat’s application firewalls were illegally transferred to Syria by way of a distributor in Dubai. The Web-filtering capabilities were allegedly used by the Syrian government to identify bloggers and Facebook users who expressed anti-government views within the country.
The privacy risks created by corporate use of these systems are significantly larger than those posed by government surveillance in the US, the EFF’s Eckersley said. “The systems that Barracuda and other companies are building are ripe for abuse. They have a small and debatable range of legitimate uses, and a large number of potentially illegitimate uses.” The ability these tools provide to essentially run “man-in-the-middle” attacks on a large scale against employees and customers, he said, creates the risk of the data being abused by the company or its IT staff.
DPI applications go far beyond simply enforcing policy. Once network operators started using DPI-based systems for security, other applications outside of security became possible as well. “The first one outside of the security market to use DPI was the (network) traffic management space,” said Bivio’s Amir. Companies such as Sandvine and Procera Networks built network traffic management systems that used DPI to improve overall network performance by giving priority to specific types of network traffic, performing “traffic shaping” or “packet shaping” to throttle bandwidth for some applications while giving priority to others.
“We can do a better job with network quality of service if the QOS is based on applications, and maybe subscribers, and use information that’s in the data flow already, but not if you just looked at IP addresses,” Amir explained.
Behavior-based marketing companies such as Phorm continue to offer “Web personalization” services that combine discovery of users’ interests with DPI-based Web security to block malicious sites. Another firm, Global File Registry, aims to go further, injecting ISPs’ own advertisements into search-engine results through DPI and packet forging. The company has combined file-recognition technology from Kazaa with DPI to make it possible for ISPs to re-route links to pirated files online to sites offering to sell licensed versions of them.
Comcast has already tested the anti-piracy waters with DPI, running afoul of the FCC’s efforts to enforce network neutrality. The company’s ISP business, which uses Sandvine’s DPI technology, moved to block peer-to-peer file sharers using BitTorrent as part of its traffic management. The FCC ordered Comcast to stop (primarily because Comcast was injecting forged packets into network traffic to shut down BitTorrent sessions), but that order was later struck down by a federal appeals court.
But these systems were designed for making quick decisions about traffic. And while they generally have reporting features that can give security managers and analysts insight into what traffic (and which user) has violated a particular policy, there’s a limit to how much information about that traffic they can capture effectively.
Drinking from the fire hose
On the other end of the spectrum is packet capture technology, which monitors the traffic passing through a network interface and records all of it to disk storage for forensic analysis. When analyzed with the right tools, packet capture appliances such as the DeepSee line from Solera Networks allow security analysts to reconstruct the entirety of transactions between two systems across the Internet gateway at sustained rates of five gigabits per second, with peaks in traffic up to 10 gigabits per second. That adds up to daily data captures of about 54 terabytes. Even at Solera’s advertised compression ratio of 10:1 in its new Solera DB storage architecture, the cost of storing all that data, especially for larger networks, quickly adds up.
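The quoted capture volume is easy to sanity-check from the sustained line rate:

```python
# Back-of-the-envelope check on the capture volumes quoted above.
sustained_gbps = 5                      # sustained rate, gigabits per second
bytes_per_day = sustained_gbps * 1e9 / 8 * 86_400
print(f"raw capture: {bytes_per_day / 1e12:.0f} TB/day")      # 54 TB/day

compression_ratio = 10                  # Solera's advertised 10:1
print(f"compressed:  {bytes_per_day / compression_ratio / 1e12:.1f} TB/day")
```

Even with 10:1 compression, a single 5 Gbps tap still produces over 5 TB of stored data every day, which is where the storage costs mentioned above come from.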
Packet capture is “valuable, but it’s limited,” Amir said. “You can’t record the whole Internet; you can’t record things in an unlimited fashion and expect to have anything meaningful to go back to. That’s a short-term solution for smaller networks. What if there was a breach that you discover three or four months later? How do you go back and see what happened on your network? That technology has not been developed until very recently.”
That technology is actually a synthesis of two. The first is DPI-based network monitoring systems that pre-process network data—capturing and storing not entire packets, but selective metadata from them and their aggregated application data such as e-mail attachments, instant messages, and social media posts.
“There’s no limit to the data you can extract from the payload, as long as you understand the payload,” said Amir. “But there’s a tradeoff of how much data you’re going to extract with how much storage capacity that’s going to take. If you go too deep, you’re sliding toward the packet capture realm. If you extract too little, you’re essentially back to IP logs which aren’t terribly useful.”
NarusInsight, Narus’ DPI-based network monitoring and capture tool, is designed to find a balance to that equation. It uses a network probe device called Intelligent Traffic Analyzer, which gets “tapped” into a network choke point. “There are usually six to 14 tap points in an enterprise network” belonging to customers of the scale Narus usually deals with, said Narus’ Harrington, “usually at the uplinks to the network backbone.”
Instead of grabbing everything that passes, the ITA watches for anomalies in traffic and aggregates packets into two kinds of “vectors” for each session: a human-readable transcript of all the packets in a particular connection, and an aggregation of all the application data that was sent in that session.
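The aggregation step can be sketched in miniature, with made-up field names: packets are keyed by their connection 5-tuple, and each session accumulates a transcript of packet events plus the concatenated application payload. This is a loose illustration of the two-vector idea described above, not Narus’ actual data model:

```python
from collections import defaultdict

# Toy session aggregator: group packets by connection 5-tuple, keeping
# a transcript of packet events and the concatenated application data.
sessions = defaultdict(lambda: {"transcript": [], "payload": b""})

def ingest(src, dst, sport, dport, proto, payload, ts):
    key = (src, dst, sport, dport, proto)
    sessions[key]["transcript"].append((ts, len(payload)))
    sessions[key]["payload"] += payload

ingest("10.0.0.5", "198.51.100.7", 49152, 80, "tcp", b"GET / HTTP/1.1\r\n", 0.00)
ingest("10.0.0.5", "198.51.100.7", 49152, 80, "tcp", b"Host: example.com\r\n", 0.01)

key = ("10.0.0.5", "198.51.100.7", 49152, 80, "tcp")
print(len(sessions[key]["transcript"]))   # number of packets in the session
print(sessions[key]["payload"].decode())  # reassembled application data
```

The payoff of this approach is data reduction: per-packet overhead is discarded, and only the session-level metadata and reassembled content move on to analysis.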
Narus’ ITAs support network taps at speeds from 100 megabits to 10 gigabits per second in full duplex, meaning they could face traffic rates of up to 20 gigabits per second. How much of that data can be captured and processed “all depends on processing that needs to be done,” Harrington said, and that depends on how many parameters (or “tag pairs”) the system is configured to detect.
“Typically with a 10 gigabit Ethernet interface, we would see a throughput rate of up to 12 gigabits per second with everything turned on. So out of the possible 20 gigabits, we see about 12. If we turn off tag pairs that we’re not interested in, we can make it more efficient.”
The data from the ITA is then sent using a proprietary messaging protocol to a collection of logic servers—virtual machines running in rackmounted Dell server hardware that further aggregate and process the data. A single Narus ITA can process the full contents of 1.5 gigabytes worth of packet data per second—5400 gigabytes per hour, or 129.6 terabytes per day per network tap. By the time the data is processed into aggregated results by the logic servers, petabytes of daily raw network traffic have been reduced down to gigabytes of tabular data and captured application data.
But as impressive as the analytical power of a NarusInsight environment is, there are still limits to the type of analysis that can be done by pattern matching within a small window of data. Unknown threats—“zero day” exploits for which there are no known signatures—can evade statistical analysis by disguising themselves as legitimate network traffic and slip past DPI tools on their own. This can happen even when there are signs elsewhere in IT systems, such as server logs and system auditing tools, that something is amiss.
For Narus users, that typically means exporting the data out of NarusInsight’s analytical environment to another tool for forensic investigation and other deeper analysis. This could be a data warehouse, a “big data” analytical platform like Palantir, Hadoop-based systems like Cloudera and Hortonworks, or Splunk. Narus recently announced a partnership with Teradata to provide large-scale analytics of NarusInsight’s output, using Tableau’s visualization software and analytical SQL queries as a front end for analysts.
“We provide our customers with a starting kit, a common dashboard” for analysis, Harrington said. From there, they can summarize and aggregate the information from the various log data, which are stored in Teradata’s multidimensional data warehouse format. And, he added, Narus is working on a Hadoop-based analytical tool using MapReduce processes to dig even further into network traffic patterns.
But other players in the network security market are moving to put the analytical power of big data systems at the center of their network monitoring solutions, rather than as an add-on. Big-volume, high-speed data storage and management technologies like Hadoop grew out of the needs of “hyperscale” Web services such as Google. By harnessing this power, data analysis software from Splunk and LogRhythm or integrated solutions such as Bivio’s NetFalcon make it possible to throw much deeper analytical horsepower at DPI data and aggregate it with other sources, both in real time and as part of long-ranging forensic analysis.
NetFalcon launched as a product just over a year ago. It uses a columnar database format similar to Google’s BigTable and Teradata’s Aster database systems as its data store, and can perform both real-time and after-the-fact analysis on data picked up by its network probes. Each probe can handle up to 10 gigabits per second, and the “correlation engine” that takes in all of the inputs can pull in over 100 gigabits per second for processing. NetFalcon’s “retention server” database takes inputs not only from the system’s network probes but also from external log sources, Simple Network Management Protocol “trap” events, and other databases. It correlates all the traffic and event data for weeks or even months. “Hundreds of terabytes or petabytes of data, but laid out in such a way that you can do queries and searches very rapidly,” Amir said.
In an enterprise environment, Bivio could store months of data from these sources; in law enforcement applications, that data could scale to years. “We’re not storing the network data, we’re classifying it, categorizing it, breaking it up into its constituent pieces based on DPI, preprocessing it, and correlating it with external events,” Amir explained. The sources of information that could be pulled into NetFalcon’s database extend beyond the typical IT sources. “You could correlate info you’re getting over a mobile network along with geolocation data,” he continued. “Then when you’re doing the analytics, have the data right there and take advantage of it.” Some of the potential uses include correlating physical devices with online accounts to uncover individuals’ online identities, and establishing the connections between individuals by mapping their network interactions.
Splunk allows organizations to do the same sort of fused analysis, taking in data generated by an organization’s existing DPI-powered systems and combining it with server logs or just about any other machine- or human-generated data the organization wants to pull in. Splunk is designed to process large quantities of raw ASCII data from nearly any source, applying MapReduce functions to the contents to extract fields from the raw data, index it, and perform analytical and statistical queries. Mark Seward, Splunk’s director of marketing for security and compliance, described it to Ars as “Google meets Excel.”
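The extract-then-aggregate pattern Seward describes can be sketched in a few lines: a map step pulls fields out of raw log lines with a regular expression, and a reduce step counts them. The log lines and field names below are fabricated examples, not Splunk’s actual processing:

```python
from collections import Counter
import re

# Fabricated access-log lines standing in for raw machine data.
LOG = """\
203.0.113.9 - alice [10/Jul/2012] "GET /login HTTP/1.1" 200
203.0.113.9 - bob [10/Jul/2012] "GET /login HTTP/1.1" 401
198.51.100.2 - alice [10/Jul/2012] "GET /home HTTP/1.1" 200
"""

PATTERN = re.compile(r'^(\S+) - (\S+) \[.*?\] "(\S+) (\S+)')

def map_step(line):
    """Map: extract (field, value) pairs from one raw log line."""
    m = PATTERN.match(line)
    if m:
        ip, user, method, path = m.groups()
        yield ("ip", ip)
        yield ("user", user)
        yield ("path", path)

def reduce_step(pairs):
    """Reduce: tally occurrences of each (field, value) pair."""
    return Counter(pairs)

counts = reduce_step(p for line in LOG.splitlines() for p in map_step(line))
print(counts[("ip", "203.0.113.9")])   # 2 requests from that address
print(counts[("user", "alice")])       # 2 requests by alice
```

Splunk's real engine adds time indexing and distributes the map and reduce phases across machines, but the field-extraction-plus-statistics core is the same shape.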
Splunk can also distribute its flat-file databases across multiple file stores. The store for a particular application “can be a 10 terabyte flat file distributed across multiple offices around the globe,” Seward said. “When you search Splunk from a search head, it doesn’t care where the data is. It sees it all as virtual flat file.”
While Splunk is a general-purpose analytics system, there are enterprise security and forensics dashboards that have been prebuilt for it, and there’s an existing marketplace of analytics applications that can be put on top of the system to do different sorts of analysis. “We have a site called Splunkbase that has over 300 apps,” Seward said, “about 40 of which are security apps written by our engineers or by customers. A couple [apps] are integrations with Solera and NetWitness.” Even raw packet data can be dumped in ASCII into Splunk in real time and time-indexed, if someone wants to go to that level of detail.
The addition of log and other data from the network is essential to catching security problems caused by things like an employee bringing a device to work that has been infected by malware or otherwise been exploited, Seward said. “What security analysts are finding is that the security architecture of the enterprise gets bypassed when you have people bring their own device to work. Those can get spearphished, or get malware, and when they come in they can allow attacks in that bypass half the gear you have to detect intrusions. Malware does its thing behind your credentials.”
Having access to authentication data for users, and combining it with location information—such as when they’ve used an electronic key card to enter or leave a building, or when they log into various applications—allows systems like Splunk and NetFalcon to find a baseline pattern in people’s behavior and watch for unusual activities. “You have to think like a criminal,” Seward said, “and monitor for credentialed activities that, looked at in a time-indexed pattern, look odd.”
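A crude illustration of that baselining idea, assuming nothing about either product’s actual models: flag a credentialed event whose hour-of-day falls far outside a user’s history. Real systems fuse many more signals (location, application, key-card data), but the statistical core looks like this:

```python
from statistics import mean, stdev

# Fabricated history: the hours of day at which one user has logged in.
history_hours = [8, 9, 9, 10, 8, 9, 11, 10, 9, 8]

def is_anomalous(hour, history, threshold=3.0):
    """Flag an event more than `threshold` standard deviations
    from the user's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) > threshold * sigma

print(is_anomalous(9, history_hours))   # a normal working-hours login
print(is_anomalous(3, history_hours))   # a 3 AM login stands out
```

The same template applies to any time-indexed credentialed activity: establish a baseline distribution, then alert on events in its tails.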
One reason why companies are increasingly interested in tools like NetFalcon and Splunk is for “data loss prevention”—blocking leaks of sensitive corporate data via e-mail, social media, and instant messaging, or the wholesale theft of data by hackers and malware using encrypted and anonymized channels.
“TOR is a good example,” Amir said. “Things like onion routers are sophisticated tools designed exactly to circumvent real-time mechanisms that would block that sort of traffic.” Analysts and administrators could search for traffic going to known onion router endpoints, and follow the trail within their own networks back to the originating systems.
Because these systems have a long memory, they’re able to catch patterns over longer periods of time, spot them instantly when they recur, and act on them automatically. Both NetFalcon and Splunk can launch automated responses to what gets discovered in the data. In Splunk, the events are launched by continuous real-time searches of data as it’s streamed. NetFalcon’s “triggering” works in a similar way, firing as its correlation engine processes incoming packet data or when patterns are found while running an analytical query. Those actions could include sending configuration changes to a firewall, changing the settings on network capture devices, or alerting an administrator about a problem.
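The triggering pattern both products share can be sketched generically: a rule watches a stream of events and fires an action on a match. The rule and events below are invented, and a real response would reconfigure a firewall rather than append to a list:

```python
# Minimal sketch of event-driven triggering on a stream of
# (fabricated) network events.
alerts = []

def on_match(event):
    """The automated response; here it just records an alert."""
    alerts.append(f"ALERT: block {event['src']} at the firewall")

# One hypothetical rule: flag connections to a suspicious port.
RULES = [lambda e: e.get("dst_port") == 9001]

def process(event):
    if any(rule(event) for rule in RULES):
        on_match(event)

for ev in [{"src": "10.0.0.8", "dst_port": 443},
           {"src": "10.0.0.9", "dst_port": 9001}]:
    process(ev)

print(alerts)   # one alert, for the port-9001 connection
```

In production systems the rules are compiled queries running continuously over indexed data, but the match-then-act loop is the same.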
Security on a budget
NetFalcon is targeted at very specific audiences: law enforcement agencies, telecom carriers and large ISPs, and very large companies in heavily regulated or secretive industries willing to pay for what amounts to an intelligence-community-grade solution. But other organizations that already have application firewalls, intrusion detection systems, or other DPI systems installed may not have the budget or the need for Bivio’s type of technology. Take, for example, the University of Scranton, which uses Splunk to drive its information security operations.
Unlike NetFalcon, Splunk “is a huge database, but it doesn’t come with preconfigured alerts,” said Anthony Maszeroski, Information Security Manager at the University of Scranton (located in Scranton, Pennsylvania). The university has about 5,200 students—about half of whom live on campus—and has turned Splunk into the hub of its network security operations, using it to automate a large percentage of its responses to emerging threats.
Maszeroski said the IT department at Scranton pulls in data from a variety of systems. The campus’ wireless and wired routers send logs for Dynamic Host Configuration Protocol and Network Address Translation events to Splunk, which include the physical MAC address of each connecting device along with a timestamp. This allows administrators to search the database by device address and follow where devices have connected from on campus. The database also pulls in information on outbound DNS queries and other types of application traffic, enterprise system logs, and events from the University’s intrusion prevention system. The Splunk database of the University of Scranton Information Security Office is “close to a terabyte” in size, Maszeroski said, and “our standard op procedure is to throw everything away after 90 days. We’re also limited by budget and storage capacity.”
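The device-tracking query described above reduces, in miniature, to filtering lease events by MAC address. A toy sketch with invented log entries and field names:

```python
# Fabricated DHCP lease events: (timestamp, MAC, assigned IP, access point).
dhcp_log = [
    ("2012-09-10 08:02", "aa:bb:cc:dd:ee:01", "10.1.4.20", "library-ap3"),
    ("2012-09-10 09:45", "aa:bb:cc:dd:ee:02", "10.1.7.11", "dorm-b-ap1"),
    ("2012-09-10 13:10", "aa:bb:cc:dd:ee:01", "10.2.1.33", "student-union-ap2"),
]

def where_was(mac, log):
    """Follow one device around the network by its MAC address."""
    return [(ts, ap) for ts, m, ip, ap in log if m == mac]

for ts, ap in where_was("aa:bb:cc:dd:ee:01", dhcp_log):
    print(ts, ap)
```

The same lookup underlies the stolen-laptop searches mentioned below: if a reported device's MAC shows up in recent lease events, it's still on the campus network.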
One frequent activity that Splunk has helped the University automate is processing Digital Millennium Copyright Act takedown notices after a student is discovered hosting pirated content from their own computer or sharing it over BitTorrent. “We needed an automated, instant way of locking those down,” Maszeroski said. Data brought into Splunk can be searched for BitTorrent traffic and tied to a MAC address; the University’s information security office has built a Java application that uses Splunk’s Web API to find the offending MAC address and then “cut the person off at a switch or wireless level.”
DHCP data can be used to track down where offending devices are. And the DHCP log data allows the information security office to help the University’s public safety department look for stolen assets. When someone reports a stolen laptop or tablet, the office can do a quick search to see where it has been on the campus network and if it’s still connected.
Splunk’s dashboard also makes it easier to pick up on things that fall outside the norm. “We can do a statistical look at logs to see if an account is sending too much e-mail to check for compromised Web mail accounts,” Maszeroski said. “Also, it’s very unusual for someone to be logging into our Web server from Nigeria. We can look for multiple usernames logging in from one IP address, or look for one logging in from different geographic areas.” The same goes for the University’s VPNs.
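One of those checks, many distinct usernames authenticating from a single IP address, is a simple grouping operation once the logs are indexed. A toy sketch with fabricated login records:

```python
from collections import defaultdict

# Fabricated login events: (source IP, username).
logins = [
    ("203.0.113.50", "alice"), ("203.0.113.50", "bob"),
    ("203.0.113.50", "carol"), ("198.51.100.9", "dave"),
]

# Group distinct usernames by source address.
users_by_ip = defaultdict(set)
for ip, user in logins:
    users_by_ip[ip].add(user)

# Flag addresses with suspiciously many distinct accounts.
suspicious = [ip for ip, users in users_by_ip.items() if len(users) >= 3]
print(suspicious)   # ['203.0.113.50']
```

The inverse check, one username appearing from widely separated geographic areas, is the same grouping with the key and value swapped, plus a geolocation lookup on the addresses.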
“If there’s an event we’re absolutely certain is an indication of badness, we can programmatically run a script within a minute to cut off an IP address at our network perimeter,” Maszeroski said.
Yes, these capabilities make it possible for organizations both to prevent security breaches and to track down the reasons for the ones that slip by. But the ability to survey almost any kind of network traffic, combine it in real time with location-based data (plus other physical-world information), and then store it indefinitely is a huge privacy concern. Even without logging on, individuals can leave identifying patterns in their digital footprints that could be used by others for less-than-ethical purposes, said EFF’s Eckersley.
“If you’re in the habit of loading a few particular blogs,” Eckersley said, “that pattern will be repeated whether you’re in the office or at home. If networks end up with extensively deployed pattern recognition systems, users are going to need very strong assurances that the data isn’t being kept. And it’s going to be difficult for companies to give that sort of assurance, because the tendency is to keep everything. Our advice is not to work for employers who demand to survey you in the office.”
And companies in some parts of the world, including ISPs, may soon find themselves being asked to keep everything. In the UK, for example, a proposed law announced in the Queen’s Speech in April would require ISPs and others to retain metadata obtained from deep packet inspection of digital communications—e-mails, text messages, instant messages, and webpage visits, among other things—for up to a year.
In the US, Senator Joe Lieberman’s Cybersecurity Act of 2012 would have pushed for broader use of systems like NetFalcon and other DPI-based systems that provide “continuous monitoring” within government. It would have explicitly given private network operators the go-ahead “notwithstanding the… Foreign Intelligence Surveillance Act of 1978… and the Communications Act of 1934” to survey their networks and share information collected that might have some bearing on cybersecurity with the Department of Homeland Security and other agencies. The bill was filibustered by Republicans because of the regulations it put on industry, but parts of it may be pushed forward by the Obama administration as part of an executive order.
Perhaps the proliferation of such surveillance is inevitable—it is what allowed the Olympics to proceed without any major incident, after all. And certainly, the use of big data analytics would be an improvement on some of the electronic intelligence systems currently used by US agencies, considering the recent revelations about the sad state of the FBI’s management of surveillance data. But the fact remains that these systems, as automated as they are, are only as good as the people who use them—both in terms of performance and privacy.
Before he became corporate vice president and chief technology officer of the Beth Abraham Family of Health Services, the second-largest long-term care provider in New York State, Steven Polinski worked for Goldman Sachs. Beth Abraham’s mission brings a somewhat different set of problems, but one no less complex than Wall Street’s. It’s just a different kind of complex.
Beth Abraham (BA) is a $700 million not-for-profit with about 40 locations and two major business areas. One half of BA is focused on long-term care programs, Polinski explained. These include four nursing care facilities with a total of 1,198 beds; a long-term home healthcare program with 100 visiting nurses and about 1,100 patients; seven adult daycare centers; and several hospice and smaller programs around the New York metropolitan area.
The other half of the business is a Program of All-Inclusive Care for the Elderly (PACE), a service that manages care providers. PACE includes Comprehensive Care Management Corp., the largest such program in the country, employing 200 nurses who deliver home healthcare in New York City and surrounding areas.
The two businesses have very different IT requirements. When Beth Abraham started its PACE program 20 years ago, it was experimental. “The concept of PACE is, it’s all-inclusive care. We get a fixed dollar amount per member from Medicare on a monthly basis, and we have the responsibility to provide all the healthcare services, including all sorts of preventive care, to keep those people as healthy as possible. We’re responsible for all the medical bills, so we do the best we can to keep them healthy. That makes Beth Abraham not just the healthcare provider, but also an insurer,” Polinski explained.
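The economics Polinski describes amount to a capitated model, which can be reduced to a one-line calculation. The member count and dollar figures below are hypothetical; the article gives no actual numbers:

```python
def pace_monthly_result(members: int, pmpm_payment: float, care_costs: float) -> float:
    """Capitation in a nutshell: fixed per-member-per-month (PMPM) revenue
    from Medicare, minus whatever care the provider actually delivers."""
    return members * pmpm_payment - care_costs

# Hypothetical month: 1,000 members at $3,000 PMPM is $3.0M of fixed revenue.
# Every dollar of care delivered comes out of that, which is why keeping
# members healthy (preventive care) directly protects the bottom line.
surplus = pace_monthly_result(1000, 3000.0, 2_800_000.0)
```

The same arithmetic explains why Polinski calls Beth Abraham "not just the healthcare provider, but also an insurer": the organization bears the full cost risk of its members' care.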
Since it started as an experimental program, Polinski says, there was no commercial software available to support it. As a result, the PACE program ran on custom-written software, and over the years the organization has built up a proprietary platform to handle care management and other operational aspects. At the same time, BA runs two separate EMR systems: a hosted SigmaCare solution at three of its four nursing facilities, deployed in 2009 and 2010 (with the fourth scheduled to roll out in April), and an in-house McKesson Horizon EMR system for its home healthcare and adult daycare sites.
Moving the managed care business to a new system that offers better EMR functionality (and meaningful use certification) is at the top of Beth Abraham’s IT agenda. Then there’s the issue of growing storage requirements at its three small in-house data centers, and the cost of networking a growing number of remote sites on the corporate WAN. And there’s a strong impetus to make IT more efficient.
How to surmount all these challenges? Polinski discusses his strategy in my next post.
I’ve got a brief profile of Army Program Executive Officer for Enterprise Information Systems Gary Winkler appearing in tomorrow’s FedTech Bisnow. But there’s only so much you can shove into an email newsletter. So here’s some of what Mr. Winkler had to say as he prepares to leave government service, raw and uncut. Be sure to pay attention to what he says about the mounting federal government talent drain…
(On succession plan:)
For the interim, Ms. Terry Watson, the deputy, will be acting. Dr. O’Neil is still contemplating what the long-term succession plan will be. There are some options — he could move somebody else in here, and Terry could stay the deputy. Dr. O’Neil could “harvest my slot” — he looks across 13 PEOs and the SMT organization he has, and he may need the SES slot somewhere else. Knowing Terry has been in the PEO for most of her career and knows our business area very well, he might be comfortable keeping her in the PEO position and using my slot somewhere else throughout ASA(ALT), maybe in the SMT community. Then we would go back to having a PEO and a military deputy at the colonel level, which is what we had before Terry came in back in December.
(On why he’s quitting now:)
I’ve been here close to 4 years, just past the 3 1/2 year mark, and I think we’ve done a lot. We’ve restructured, developed a lot of our staff, we have stability in the program offices, we have a strategic plan, we have a strategy map, a balanced scorecard we measure our performance against monthly — we’ve got a very mature Lean Six Sigma organization, and make sure that we do continuous process improvement.
We’ve just come a long way in the past three years from an organizational maturity standpoint, so the org doesn’t have to rely on superstars, and no one is a single point of failure — including myself. We’ve got processes in place and great people throughout. So now is an appropriate time for me to move on — I feel like I’ve done all I can do here except doing the same. What I’ve been focusing on in the last six months to a year is developing our workforce, our younger leaders, because a lot of the programs are being very well executed. So I feel pretty good about where our office is; I need some more challenges.
(On Federal and DOD IT consolidation plans:)
(On the DOD consolidation roadmap:)
I think we have been working toward all of those (Kundra’s) objectives all along. Kundra’s 25 points on where he wants CIOs to go — we’ve been working in that direction long before he came into the office. So from a strategic, operational, and tactical standpoint, I don’t see too many changes for our programs. We’re trying to move our apps into data centers, whether they’re DISA, Army, or commercial; we have a procurement in source selection, which should be completed in a month or two, for commercial data center services. So I don’t see too many changes. It’s all good. And that shouldn’t be surprising, because we’ve been in business for a while here. It will have more of an impact on organizations that have not had information technology systems acquisition as their core mission — there will be a lot more changes for those who haven’t been doing what we do all the time.
(On his biggest challenges:)
The biggest challenge for anybody with this job is time management — there’s just not enough time in the day, or night, or weekend, or holiday. There are a lot of programs in this PEO, and they’re very diverse. Just working the actions, knowing the issues, and working them up at the headquarters level or the OSD level takes a lot of time. Every one of our programs has a general officer sponsor, so I’m dealing with 30 to 40 general officers on a continual basis to address the hard problems and hard challenges, and those are the ones that usually cross organizational boundaries. The tech issues aren’t so much a challenge; it’s all the other elements, whether it’s doctrine, organization, personnel, facilities, money… I don’t see money as a super big issue, but the budgets are going down, so our PEO staff are going to have to be as creative as possible to keep programs moving forward to deliver capabilities on schedule as resources shrink.
(On applying Lean Six Sigma across procurement:)
I do think we should apply Lean more widely. The problem is, a lot of that is outside our control. I can only control what we execute inside PEO EIS — a lot of the contracting process is really outside of our organization. So we work the pre-solicitation materials, but once an RFP goes out on the street, we lose control of the procurement and contracting process after that. It’s really up to the contracting organizations. I’d like to see more application of Lean Six Sigma in the contracting world.
(On the mounting talent drain from government, and whether new career paths like the Program Manager track will help:)
I don’t think so. I think there’s going to be such a squeeze on money that it’s going to be hard to develop new career tracks, courses, and training. That’s all an investment, and I would be surprised if it happens. It would be nice, but I think our professionals and our younger workforce are going to learn through experience more than anything else — they’ll get acquisition certified, but anything above and beyond that, they’ll be swamped doing the work their mission requires. The support contracting workforce is supposed to go down. The government workforce is going to shrink. It’ll shrink through attrition and hiring freezes like we’ve had. In the Army, we’re supposed to attrit 10,000 civilians out of the workforce over the next three years. So I think it’s going to be a big challenge. As people move up into more senior leader positions, do they have the experience, training, and knowledge to do a really good job in those positions? I think they’re going to need some help.
(So, government is going to need to lean on private sector more?)
I think so. I think as with every other industry there will be a shakeout. And government support contractors — you see that from time to time in other industries, where there’s a weeding out of different companies and the market shrinks, but the ones left standing will be the ones that provide the best capability for the money, and provide government agencies the best expertise at the best price.
(that’s the business you’re moving into?)
That’s where I can see that I can contribute and add value. I don’t need to make a lot of money; I just have to pay the bills. And if I can capture people leaving the government workforce — for whatever reason they leave, whether it’s a pay freeze or they’re just frustrated — they’re leaving not because they don’t like the mission, but because of morale issues. So if I can capture them, take care of the morale issues, and keep them working on the government side helping those new leaders, it’s win-win. I know right now is exactly the wrong time to get into government support contracting, but if someone is in there providing great support at a great price, they’re going to do well, as opposed to some of the companies that haven’t differentiated themselves.
(On the morale of senior folks in fed tech:)
That’s how I qualify it (morale issue). It’s probably a mixture. A pay freeze doesn’t help. The technical people are in demand, and they have options, and the people under the new retirement system have options. So no longer are civil servants held by the golden handcuffs of staying in until they’re 55 with at least 20 years, leaving with no retirement if they go before that. Under FERS, you’re vested after three years of service, and you get a pension when you hit 62 that’s 1 percent of the average of your three highest salaries times the number of years you worked.
The pension isn’t as good as the old system’s, but then again, people can leave. And I’m not sure the Army or government senior folks recognize that paradigm shift — that they now have a mobile workforce where people in demand don’t have to stay until they’re 55 with a minimum of 20 years of service. Unfortunately, I think the government is going to see a lot of good people leave, because they can, and they want to do more.
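Winkler's back-of-the-envelope FERS math works out as follows. The salary and service figures are hypothetical, and the real rules have wrinkles (such as a higher multiplier in some retirement scenarios) that this sketch ignores:

```python
def fers_annuity(high3_average: float, years_of_service: float) -> float:
    """Basic FERS pension as Winkler describes it: 1 percent of the average
    of your three highest salary years, times the number of years worked."""
    return high3_average * years_of_service / 100  # 1 percent multiplier

# Hypothetical employee: $120,000 high-three average and 15 years of service
# yields an $18,000-per-year annuity starting at age 62.
annuity = fers_annuity(120_000, 15)
```

The point of his "golden handcuffs" comparison is visible in the formula: the benefit accrues linearly with years served, so a mid-career departure forfeits far less than it did under the old system.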
That’s a good one. No. I don’t mind working 15 hour days. It becomes a habit after a while. No, I actually have 3 or 4 months of vacation that the Army is going to have to pay me for. So that will sustain me for the near term. The big benefit of being an SES is you get to roll over more vacation because you don’t get to take it, and you get better parking spots.
Cross my fingers I can pay the bills — I’m used to being poor; I’m a government employee, so I wouldn’t know what I would do with more money.
My motivation is I can do more. I love the job here, love the people and the mission, but I feel I can do more. It’s unfortunate that I’ll be banned from the Army for one year, so I’ll have to go help OSD, and the Navy, and the Air Force, and Cyber, and Agriculture, and other orgs that need my help. I think I can help them. I’ve got all the bruises and scars from working in this business over the years.
(Things that were important to your professional development?)
Professionally, not knowing what the heck I wanted to do, and bouncing around doing a variety of things, and never feeling like I fit in anywhere. So that seemed to work pretty well here. There’s a good hodgepodge of programs here, and I have a technical background, and I have a business background too. I worked in private industry, and then I came back into the government, and I worked at headquarters, I worked here, I worked in an Air Force office, so, I think that diversity and just moving around seemed to be a good fit. When I was in college, I was an EE, but I don’t think I was your typical engineer. Then I went to graduate school and I was an MBA student, but I wasn’t your typical MBA student, because they were wearing blazers and bowties to class, and I came in with jeans and a flannel shirt, then grew a beard, so I didn’t really fit in there either. But that was ok, because I had nearly a 4.0 so they couldn’t give me a hard time. But I’m still trying to fit in somewhere.
(Words of advice for whoever takes over PEO-EIS:)
Just the standard words of advice: don’t screw it up. Somebody has to do things their way, and I think with Terry Watson here everything will go smoothly. We have a great set of directors and PMs, and I think the organization will continue to thrive, even with the challenges they’re going to face as budgets shrink. And you know, Sean, how the IT budget is — nobody can do anything without technology — so I don’t think this office will be hit as hard as a lot of others.
At the moment, I’m waiting for some sort of confirmation. But this is what I know:
Since Monday, Change.org — a site that hosts petitions and other social action efforts for others — has been the subject of a DDoS attack from China, according to Ben Rattray, Change.org’s founder. They’ve been working with their hosting company and with cyber experts to screen out the attack as much as possible, but the site was down much of yesterday. And it’s down today, intermittently.
Interesting fact: Change.org is hosted on Amazon Web Services.
Interesting fact: AWS’ Elastic Compute Cloud data center in Northern Virginia is experiencing an outage of various services, affecting Quora, HootSuite, and other social media companies hosted on it. That’s the same facility where Change.org is primarily hosted, since the Northern Virginia data center serves AWS’ US East region.
The Chinese have been varying their attack. Is it possible they’ve exploited Amazon’s EC2 APIs to attack now?
I haven’t heard back from Amazon.
(This post was originally posted to the Virtual Integrated Systems public sector blog.)
My wife is a librarian at a county public library. Not to brag, but she excels at helping her patrons find what they’re looking for, either in the stacks or in databases. But when she became a librarian, she wasn’t expecting the degree to which she’d be called upon to provide another service: tech support.
Public libraries have long been a significant service of local governments, but their mission has changed significantly over the last decade as more of our lives have moved online. Libraries now are where people who don’t have PCs at home or work go to do everything from check their e-mail to apply for jobs, and librarians are increasingly called upon to help with basic computer literacy issues as often as they’re asked a research question or for a book recommendation, if not more often.
But managing the configuration and security of public computers at the library can be an expensive undertaking. With budgets shrinking, adding more computers or even maintaining the ones that are in place can be difficult. The cost of adding software licenses for operating systems and applications can quickly outstrip the basic hardware cost. And with patrons bringing removable media to the library and accessing potentially malicious sites, the security risks are high.
Given their limited number of PCs, libraries have to restrict the amount of time patrons can use systems. The software they use to meter usage and assign computers can often create difficulties both for the patrons and the librarians who serve them.
Then there’s the issue of how to better serve customers who bring their own technology to the library and may wish to use resources such as databases. While some libraries offer Web portals to access these services, the cost of setting up such systems can be prohibitive for mid-sized and smaller public libraries, especially when budgets are tight.
The City of Staunton, Virginia, for example, faced many of these problems with the operation of its public library, according to Kurt Plowman, the city’s CTO. Mounting maintenance problems and malware issues left the library’s computers unusable as much as 50 percent of the time.
“Our resources were stretched thin, so spending several hours a week fixing software problems and replacing parts was becoming a never-ending nightmare,” he said recently. “The public library was overdue for a solution.”
Plowman turned to desktop virtualization as a solution, using thin clients from Pano Logic to replace the library’s aging desktops. The city used VMware to serve up virtual desktops on demand to the terminals, clearing each session at its end and preventing the storage of any data on a shared hard disk by the user.
With virtual desktops, each user gets a fresh, controlled configuration, locked down from potential security threats. That means fewer helpdesk calls, fewer frustrated patrons, and much lower desktop support costs for the city, which is considering expanding the virtual desktop model to other departments of city government.
The City of Staunton was recognized for this solution with a Governor’s Technology Award for Innovation in Local Government at the Commonwealth of Virginia Technology Symposium in September 2010.
Chris Kemp, who had a few short weeks ago been greeted with rockstar fervor at the Cloud/Gov conference in Washington, DC, has stepped down from his role as NASA’s Chief Technology Officer for Information Technology. Kemp was the champion of NASA’s Nebula program, the agency’s private cloud effort, and helped with the General Services Administration’s launch of the Apps.gov cloud service program. But in the face of budget cuts and continued institutional resistance to his agenda for changing government IT, Kemp submitted his resignation in March.
“Whereas I thought I had the best of both worlds being a Headquarters employee stationed in Silicon Valley,” Kemp said in a blog post announcing his move, “I actually had the worst of both worlds… no influence when I can’t be in all of those meetings at NASA HQ, with no mandate to manage projects at Ames. As budgets kept getting cut and continuing resolutions from Congress continued to make funding unavailable, I saw my vision for the future slowly slip further from my grasp.”
Kemp’s dilemma, while certainly higher profile than that of many state and local CIOs and CTOs, is hardly unique. With revenues at historic lows and budgets tight, it’s perhaps harder than ever to achieve meaningful change in the way agencies run their information technology, even at tech-focused agencies like NASA. At the federal level, the budget standoff threatens to put on hold major initiatives that could actually save the government money.
But perhaps more dangerous, the uncertainties around IT budgets and programs at all levels of government can be demoralizing, particularly to the most talented and valuable members of IT organizations who have options elsewhere. As other employment opportunities emerge, government IT organizations could see an exodus of talent, making it even more difficult to do more with less.