Superstorm Sandy has left more than a trail of devastation in neighborhoods; it continues to wreak havoc on businesses throughout the area. A friend of mine is relegated to indefinite telecommuter status because salt-water flooding compromised infrastructure at her company's Manhattan headquarters.
At many of the businesses in the strike zone, IT teams' disaster recovery plans are now under the microscope. Are there enough software licenses to cover everyone who needs to access applications remotely? Can the servers handle a maximum load of remote users all day, every day? Are employees equipped with the hardware, software, and connectivity they need to do their jobs?
Businesses in New York, New Jersey, and nearby regions are finding this out the hard way right now. All others should study the responses closely, analyze their own disaster recovery and business continuity plans, perform drills in the near future, and watch for postmortems from their industry peers.
Before you can assess your ability to support a full-time remote workforce, you first have to analyze your workflow. Chart each person's role and the exact applications and centralized data he or she would need access to in such a scenario. A customer service representative, for instance, will not need access to the financials database. Being this specific becomes essential when network resources are constrained: you don't want users consuming bandwidth, server CPU, and licenses for unrelated tasks.
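One lightweight way to capture that charting exercise is a role-to-resource matrix that access decisions can be checked against. A minimal sketch in Python (the role and application names are hypothetical, not from any particular product):

```python
# Hypothetical role-to-resource matrix: each role lists only the
# applications and data stores its holders need while working remotely.
ACCESS_MATRIX = {
    "customer_service": {"crm", "ticketing", "knowledge_base"},
    "accounting":       {"financials_db", "payroll", "expense_portal"},
    "sales":            {"crm", "quoting_tool"},
}

def allowed(role: str, resource: str) -> bool:
    """Return True only if the role's charter includes the resource."""
    return resource in ACCESS_MATRIX.get(role, set())

# Customer service gets the CRM, but not the financials database.
print(allowed("customer_service", "crm"))            # True
print(allowed("customer_service", "financials_db"))  # False
```

In practice this mapping would live in your directory or identity-management system rather than in code, but writing it out forces the role-by-role specificity the exercise calls for.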
Next, use asset management tools to inventory the software, hardware, and platforms in use across the enterprise. Check versions, security patches, and overall configurations to ensure they can handle the rigors of remote access. If not, budget for an upgrade as soon as possible. Users should not have to suffer a spinning hourglass while chaos erupts around them.
Have all your workers telecommute for a day. This might sound crazy, but you’ll never really know how the ecosystem (human and technology) will respond until it is under that type of duress. Take note of every aspect:
What kind of support do users need? Can you offer that training upfront or pre-distribute how-tos? Will you need an emergency help desk to get users up and running?
Did the servers hit maximum utilization, and at what point? Did virtualization help balance the load, or do certain applications need to be reconfigured? Do you need to implement better prioritization so that mission-critical applications always have the CPU power they require? Performance management tools, both onboard servers and third-party, measure and analyze this data for you.
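To illustrate the kind of measurement those performance tools automate, here is a minimal sketch using only the Python standard library on a Unix-like server; the alert threshold is a hypothetical placeholder, not a recommended value:

```python
import os

LOAD_ALERT = 0.85  # hypothetical threshold: 1-minute load per core

def check_load() -> float:
    """Compare the 1-minute load average against the core count and
    flag sustained saturation. Returns the per-core load figure."""
    one_min, _, _ = os.getloadavg()       # Unix-only system call
    cores = os.cpu_count() or 1
    per_core = one_min / cores
    if per_core >= LOAD_ALERT:
        print(f"ALERT: load {one_min:.2f} across {cores} cores")
    return per_core

check_load()
```

A real performance-management product samples continuously, correlates load with specific applications, and trends the data over time; this sketch only shows the raw signal such tools start from.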
How did your bandwidth hold up? Network monitoring and traffic analysis tools illustrate capacity issues during peak times as well as usage patterns. While an actual disaster would skew these results, it gets you closer to providing a stable network for access. Again, you might have to set priorities so that voice over IP, video, and other communications get through with low latency.
Do you have enough licenses to support a remote workforce? If users ultimately have to rely on applications such as Web-based mail that they might not otherwise use, then you’ll need enough seats to accommodate everyone. Executives and HR use email in a disaster to gather and disseminate status updates and other important information.
How will you get data re-centralized? Users may be forced to work offline because of spotty connectivity and other issues. You'll have to ensure that whatever documents pile up on their laptops get back to the datacenter without overwriting other versions.
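A common safeguard here is a compare-before-write check, so an offline copy never silently clobbers a version someone else pushed to the datacenter in the meantime. A minimal sketch, with hypothetical helper names and a content hash standing in for whatever version metadata your sync tooling actually records:

```python
import hashlib
import shutil
from pathlib import Path

def digest(path: Path) -> str:
    """Content hash used to detect divergent versions."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def recentralize(local: Path, central: Path, base_digest: str) -> str:
    """Copy the laptop copy back only if the central file is unchanged
    since the user last synced (base_digest). Otherwise park the
    offline copy as a .conflict file for manual merge."""
    if central.exists() and digest(central) != base_digest:
        conflict = central.with_suffix(central.suffix + ".conflict")
        shutil.copy2(local, conflict)   # preserve, don't overwrite
        return "conflict"
    shutil.copy2(local, central)
    return "uploaded"
```

Real document-management and sync products handle this with version numbers, locks, or merge workflows; the point of the sketch is simply that "last writer wins" is not an acceptable re-centralization policy after a disaster.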
What is the state of your security? In a disaster, IT can be tempted/pressured to compromise on its tough remote and mobile security stance just to get people up and running. However, doing so can have long-term, destructive consequences.
A lot of people will use their own devices, so make sure they are educated and trained to access applications and data safely. Also, while it might seem a drag on productivity, if working at public hotspots is considered too risky and out of compliance in normal times, the same holds true during a disaster. For instance, employees cannot work with sensitive user or corporate data at a coffee shop just because it has Wi-Fi and power. Those employees might require a pre-assigned temporary office with full network security.
Most companies prepare for short-term inconveniences, such as a nor'easter taking out power at headquarters. Superstorm Sandy has taught us that serious geographic, operational, and infrastructure damage can take a company offline indefinitely if IT, workers, their devices, and the datacenter aren't properly prepared.
Will Superstorm Sandy change how you conduct disaster recovery analysis? Share on the comment board below.
@Beth The decision was probably along the lines of, "Let's just keep what we have rather than spend our profits on upgrading equipment." Rates go up every summer, not because the utility is spending more but because demand increases with the use of air conditioning. They drop in winter when the price of gas goes up. The thing is that when so many people are counting on these things, you'd better not rely on just shuffling through in a best-case scenario but on holding up in a worst-case scenario.
Ariella, I think Cordaro captured the issue with the quote on providing cheap electric power to customers. I'd love to see the ROI work the utility has done on infrastructure upgrades, assuming it's done those exercises in determining not to upgrade. Can it deliver service to customers more cheaply on old infrastructure than it can with upgraded infrastructure and automated systems? How did storm damage play into those assessments? Was the risk of not being prepared outweighed by the low cost of the current infrastructure? (Because, let's not forget, when rates increase so do customer complaints.) I'm certainly not justifying the utility's shoddy infrastructure -- just wondering how it was making its decisions!
@Sandy Related to this is the question of the slow response on the part of the utilities. LIPA has drawn particular attention to itself in this regard. Today's Newsday has an article, "Why LIPA failed: Utility ignored warnings it wasn't ready for major storm." Primarily it was because they just didn't bother to upgrade what needed to be replaced. But there were also very poor analytics involved. While ConEd could show detailed maps of outages, LIPA couldn't. As the article recounts:
"a Newsday reporter at the Hicksville headquarters of National Grid — the company contracted by LIPA to oversee operations — saw engineers who were using highlighters and paper maps to track thousands of outages, as ratepayers banged in frustration on the building's locked front doors."
Of course, the area got a double whammy with the nor'easter: "Ten days after the superstorm battered the region, more than 170,000 Long Islanders were still without power. The nor'easter on Wednesday piled on with another 90,000 outages."
My understanding is there was a similar scenario for PSE&G in NJ, though they do serve more customers overall.
But to get back to the infrastructure:
The utility's infrastructure has changed little since Gloria, said Matthew Cordaro, who served as vice president of engineering at LIPA's predecessor, the Long Island Lighting Co., when that hurricane struck.
"I think somewhere along the way they lost sight of what the primary mission of a utility is," Cordaro said Thursday, "and that is to provide cheap electric power to customers."
Alexandra von Meier, the co-director of electric grid research at the California Institute for Energy and Environment, said other utilities face similar challenges.
"I don't think it's very unusual to have very old and clunky technology in their power distribution context," she said. "If they were more modern ... restoration could be faster, and we all want that."
More than a half-million residents lost power for a week after Irene. Cuomo — who said that "at a minimum, LIPA did a terrible job of communicating" following that tropical storm — requested a review of the Uniondale-based utility.
The resulting report concluded that LIPA and National Grid did not meet industry standards in dozens of aspects concerning planning and recovery in major storms.
Even fax machines and other basic office equipment were unavailable or broken at substations, the facilities that transfer power to thousands of homes, hindering communication. One substation coordinator reported having to run to a local office supply store to purchase a printer.
Beth, it's hard to say whether the tools out there are the tools that are most needed in a situation like Sandy. In fact, I'd venture to say you'll most likely see some technology and training tweaking in the wake of this storm.
I think that as cloud computing (SaaS, PaaS, IaaS, etc.) takes hold, IT has to go back to its core values of network monitoring, traffic analysis, server analysis and all that good stuff that is supposed to be standard practice. This might have been the wake-up call on that score.
As for the necessary investments, hard to say. I'm sure it varies according to the architecture of the data center, age of the company and other factors. I do believe that everyone should be revisiting their own capabilities and that this should eliminate any remaining "it's not going to happen to me" attitudes.
Let's not forget that shortly before Sandy, the West Coast, including Hawaii, was under a tsunami watch. It can happen anywhere.
Sandra, do you think IT vendors are doing enough to deliver products that enterprises can use to analyze how well their network infrastructures, application performance, security, etc. are doing? And, I should add, that allow customers a way to absorb the analysis easily (for example, via interactive data visualizations on network management, app performance, or security dashboards)? And, if so, do you think enterprise IT execs are taking these types of tools seriously and making the necessary investments in them?
The storm revealed some ugly truths for businesses that use third-party vendors and discovered after the fact that some of those vendors lacked geographically dispersed backup systems and/or redundant power sources.