General Electric Company
Powering Global IT Transformation
Company IT Footprint: General Electric is a global name and has been an icon of technology innovation for well over a century. At the time of writing, GE had about 9,000 IT employees and another 15,000 contractors. It maintained a portfolio of around 8,000 applications distributed across 45,000 compute nodes, and its IT infrastructure served 300,000 employees in 170 countries around the world.
“We had a target to take a billion dollars of technology costs out of our own operations. And we did it! By the end of 2017, we exceeded the billion dollars of productivity, and we figured out how to deliver three billion dollars’ worth of productivity within the same time frame.”
Jim Fowler, Former Chief Information Officer, General Electric Company
General Electric Company (GE) embraced cloud transformation to improve employee productivity while enabling growth, to reduce the complexity and associated costs of its infrastructure, and, ultimately, to improve its security posture for mobile users. Here is the story of how GE executed its cloud transformation across 8,000 global locations.
In the words of Jim Fowler:
I was the group CIO for General Electric until late 2018 and have been with GE for 18 years, having worked in every one of our businesses (apart from our healthcare business) in that time. I’ve done everything from being a systems administrator to a Six Sigma Black Belt focused on process excellence and improvement. Around 2015, I was approached about the CIO position. They said, “Hey, we’re entering into a time of digital transformation. How we run as a company is going to change drastically as we become technically focused. We’d like you to take this job.” This was a significant milestone for us as a company, as none of my predecessors came from IT. They came either from finance or business development. So, I’m the first CIO who has come up from technology inside GE.
The launch of the GE digital software business
Seven years ago, at the request of our CEO, we started looking at what it would take to create a digital software business inside GE. We weren’t thinking about GE internally, but instead about creating software to help our customers get more out of the assets that they run. As we went down that path, we started conceptualizing what a GE software business would look like, and we soon realized, You know what? If we’re going to look at how we help our customers be more digitally integrated into the systems that they run, we better think about how we do the same thing inside our own four walls. And so, we set a target for ourselves: we would find a way to drive out a billion dollars, cumulative, of productivity cost by applying technology inside GE.
We had a target to take a billion dollars of technology costs out of our own operations. And we did it! By the end of 2017, we exceeded the billion dollars of productivity, and we figured out how to deliver three billion dollars’ worth of productivity within the same time frame. We became our own best example of what good digital industry looks like.
A new focus on cloud
The transition to the cloud became important as we realized that replicating what we had on-premises in the cloud was going to take investment, and that our existing application infrastructure needed to be a lot nimbler. We needed to be able to make changes quickly and spend our resources on application content development rather than infrastructure work like storage and servers.
It started with data center transformation
At that point we had seven data centers around the world housing our own infrastructure, so we thought: What if we could take the majority of the resources in those data centers and make them cloud-based? We realized that using the cloud would help us free up the manpower we needed to drive this new transformational idea.
The cloud strategy we developed was about taking out our own data center infrastructure and building on the idea of reuse versus reinvention. In a cloud-based world, there is a common code set and a common set of software. So we used shared microservices and common components that allowed us to build applications faster.
In the end, we realized it wasn’t just about freeing up resources. It was about increasing the velocity with which we could build our own code. For every dollar we spend in this “world of the cloud,” we reuse what we already have and see a three- to four-dollar return. And that is why our evolution into the cloud has become so important to us.
Next, we needed to transform our workforce
Shifting to the cloud meant replacing our outsourced contractors, since we had ceded a lot of our hands-on technology expertise to third parties. We knew that had to change, so for the last two years, 95% of our new hires have been in entry-level positions. In the last twelve months alone, we’ve brought in about 1,500 people in a range of new positions: building code, system administration, database administration, and cloud architecture.
Next, we had to focus on our project managers. They knew how to manage outsourced labor, but they needed development on how to run a product. So, we had to build product management skills within the current generation of project managers. They had to understand not just how to run a project plan, but also how to manage product development: pricing a product, understanding cost, and making investment decisions on features and functions. It was a big transition for them to learn to measure outcomes from the company’s perspective.
We established guardrails to support innovation
Culturally, I would say one of the hardest things to adapt to has been this idea of reuse versus reinvent. We have a 125-year history full of strong innovation and smart engineers. That culture manifests itself as employees trying to reinvent what somebody else has already done because they think they can do it better.
What they don’t realize is that this approach inevitably slows progress, because it takes longer to get to a final solution. We’re trying to focus on improving ideas, rather than inventing new ones or reinventing old ones.
In the past year, we’ve tried to encourage this new focus by implementing what we call a set of guardrails. The guardrails set a minimum standard by which all our developers must operate. But within those guardrails, we welcome and encourage innovation. We like for our people to find newer and better ways to use technology. But when they want to go outside those guardrails, it requires an architectural decision from our chief architects to change them.
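To make the guardrail idea concrete, here is a minimal sketch of how a minimum standard might be expressed as an automated policy check in a build pipeline. The manifest fields, approved runtimes, and shared services named here are hypothetical illustrations, not GE’s actual tooling:

```python
# Hypothetical guardrail check: a minimum standard every service must meet.
# All names (runtimes, shared services, manifest fields) are illustrative.

GUARDRAILS = {
    "approved_runtimes": {"python3.11", "java17", "nodejs20"},
    "require_tls": True,
    "must_reuse": {"auth", "logging"},  # shared components, not reinvented ones
}

def check_manifest(manifest: dict) -> list:
    """Return guardrail violations for a service manifest; empty means compliant."""
    violations = []
    if manifest.get("runtime") not in GUARDRAILS["approved_runtimes"]:
        violations.append(f"runtime {manifest.get('runtime')!r} is not approved")
    if GUARDRAILS["require_tls"] and not manifest.get("tls", False):
        violations.append("encryption in transit (TLS) is required")
    missing = GUARDRAILS["must_reuse"] - set(manifest.get("shared_services", []))
    if missing:
        violations.append(f"must reuse shared services: {sorted(missing)}")
    return violations

# Within the guardrails, teams innovate freely; changing the guardrails
# themselves is an architectural decision reserved for the chief architects.
print(check_manifest({"runtime": "python3.11", "tls": True,
                      "shared_services": ["auth"]}))
# -> ["must reuse shared services: ['logging']"]
```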
Moving towards data protection based on risk tolerance
Changing our network infrastructure was not as hard as we anticipated. We already had a complex network structure because we support over 8,000 different locations. So, instead, we had to focus on data. How do we think about the value of data, or the risk of data loss or data manipulation? The answer: we built a data infrastructure on top of the network that protects us from a risk perspective.
For example, we think one percent of our data represents 80% of the risk to the company, and that data sits inside a super-controlled vault of information that is separate from what we consider the rest of the GE network.
We have different classifications of data that determine, from a risk perspective, the level of the network on which data can reside. And once we had that laid out, the networking was really just a physical design that fit those data requirements. So, what I always tell people is: don’t worry so much about designing the network first. Design your risk tolerance first, as it relates to the loss or manipulation of data, and that will let you define how you think about the network in what is going to be a hybrid cloud environment.
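As a sketch of what that classification-driven design might look like in practice, the mapping below pairs each data class with the network tier it may reside on; the class and tier names are hypothetical, not GE’s actual taxonomy, and the “vault” tier corresponds to the small slice of data that carries most of the risk:

```python
# Illustrative mapping from data classification to network tier.
# The classes and tier names are hypothetical, not GE's actual taxonomy.

NETWORK_TIER_FOR = {
    "critical":   "vault",          # the ~1% of data carrying ~80% of the risk
    "restricted": "internal-core",  # inside the corporate network only
    "internal":   "hybrid",         # internal network or approved cloud
    "public":     "internet",       # may live on internet-connected services
}

def placement_for(classification: str) -> str:
    """Map a data classification to the network tier it may reside on."""
    # Anything unclassified is treated as the most restrictive tier.
    return NETWORK_TIER_FOR.get(classification, "vault")

print(placement_for("restricted"))  # -> internal-core
print(placement_for("unknown"))     # -> vault (fail closed)
```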
Moving from a hub-and-spoke to a local internet breakout architecture
We have different types of networks. You’ll find our large sites are still using hub-and-spoke networks that come back into a core network architecture that allows connectivity both inside the GE network and out to the GE cloud.
In our smaller locations, we’re disconnecting them from the MPLS networks, and we’re using local internet providers, like Time Warner or Xfinity in the United States, or Orange in Europe, to provide connectivity to the internet. This allows those small sites to be internet-connected back into the GE world, whether that be a cloud-based solution or an internal GE network.
We think of our smaller sites the same way we think about a home office: internet-connected, with us providing the functionality they need. This way, the smaller sites have a secure connection for data, and we can manage the GE data that sits on the devices in those remote locations rather than trying to implement security around the distributed network.
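A minimal sketch of that site-level decision, with hypothetical site types, might look like the following; the point is simply that small sites are treated like internet-connected home offices while large sites and data centers keep dedicated links:

```python
# Hypothetical per-site connectivity decision: large sites stay on
# dedicated hub-and-spoke links; small sites use local internet breakout
# with security enforced in the cloud rather than by on-site appliances.

def connectivity_for(site: dict) -> str:
    """Pick a connectivity model for a site."""
    if site["type"] in {"data_center", "large_campus"}:
        return "mpls-hub-and-spoke"  # dedicated circuit into the core network
    # Small office: local ISP (e.g., Xfinity or Orange) to the internet,
    # with the GE data on its devices managed directly.
    return "local-internet-breakout"

for site in [{"name": "hub-01", "type": "data_center"},
             {"name": "field-office-4711", "type": "small_office"}]:
    print(site["name"], "->", connectivity_for(site))
```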
Security is about data, not the network
Our security starts and ends with the definition of the data. We have a strong understanding of the regulatory requirements around how that data is managed. Then we build the enterprise architecture or the physical architecture around that data. So, we design security in from the very beginning of a project based on those discussions. We don’t think about security as a set of requirements or checkboxes. We think about security as features that we designed into the product based on decisions around the data.
We also set guidelines on the secure software development lifecycle that require product managers to include security features based on the data requirements and the transactional data that sits in those systems. Then we build an infrastructure around being able to watch and manage how that data moves over time, based on its criticality.
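One way to picture such a lifecycle guideline is as a gate that derives the required security features from the classification of the data a product handles. The feature names and classes below are hypothetical illustrations, not GE’s actual guidelines:

```python
# Hypothetical SDLC gate: the security features a product must ship with
# follow from the classification of the data it handles.

REQUIRED_FEATURES = {
    "critical":   {"encryption_at_rest", "encryption_in_transit",
                   "mfa", "audit_logging", "data_movement_monitoring"},
    "restricted": {"encryption_at_rest", "encryption_in_transit",
                   "audit_logging"},
    "internal":   {"encryption_in_transit"},
    "public":     set(),
}

def sdlc_gate(data_class: str, implemented: set) -> set:
    """Return the security features still missing for this data class."""
    # Unknown classes fall back to the strictest requirements (fail closed).
    return REQUIRED_FEATURES.get(data_class, REQUIRED_FEATURES["critical"]) - implemented

print(sorted(sdlc_gate("restricted", {"encryption_in_transit"})))
# -> ['audit_logging', 'encryption_at_rest']
```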
On the network or off: blurring the lines
Our former CTO, Larry Biagini, predicted that the lines between what’s inside the GE network and what’s outside it would blur, making it harder to decipher which is which. Zscaler’s cloud security platform gave us a way to control data in transit in a world where we didn’t control the network.
“In the past, we were big VPN users, but have decreased our VPN usage by almost 90% since we started this project.”
As a result, we don’t run traditional VPN inside GE anymore. Instead, we have a custom-built application, built on top of Zscaler connectivity, that runs on every device. When you connect to any network anywhere in the world, it determines: a) are you on something we control or not, and b) does your PC have the level of controls on it that we need to protect our data? If not, we put those controls there; if yes, you get access to a certain level of information inside the organization.
What’s behind all that is a Zscaler infrastructure that frees us from worrying about where somebody might show up to work one day. It creates that ubiquitous connection between the GE infrastructure and the end user’s PC, while enforcing controls. You can control everything from traditional proxy blocking to building intelligence that says: when Jim Fowler shows up on a PC sitting in Atlanta, Georgia, giving him access to these network resources makes sense. But when Jim Fowler shows up with his PC in some country we have concerns about, we’re going to restrict his access a little more. We’re not necessarily going to give him access, and we’re going to monitor more closely what he’s doing on that device that day. And so Zscaler checks a lot of different boxes: it’s not just a traditional proxy provider, but a next-gen network security provider for us. It allows us to manage this extended network of devices that sits out in the GE infrastructure.
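An illustrative sketch of those two checks plus the location signal could look like this; it is not Zscaler’s actual policy engine, and the risk tiers, access levels, and monitoring modes are hypothetical:

```python
# Hypothetical access decision combining device posture, network
# ownership, and location risk, as described above.

def access_decision(on_managed_network: bool,
                    device_controls_ok: bool,
                    location_risk: str) -> dict:
    """Decide access level and monitoring for a connecting device."""
    if not device_controls_ok:
        # Remediate first: push the required controls onto the device.
        return {"access": "remediation-only", "monitoring": "standard"}
    if location_risk == "high":
        # Same user, same PC, riskier location: restrict and watch closely.
        return {"access": "restricted", "monitoring": "enhanced"}
    return {"access": "full" if on_managed_network else "standard",
            "monitoring": "standard"}

# e.g., the same user in Atlanta vs. a country of concern:
print(access_decision(True, True, "low"))    # -> full access
print(access_decision(False, True, "high"))  # -> restricted, enhanced monitoring
```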
In the past, we were big VPN users, but have decreased our VPN usage by almost 90% since we started this project. We are connecting about 3,000 of our smaller locations through a local internet provider versus a high-cost MPLS or dedicated network as we would have had in the past.
In data centers and large locations, we still have dedicated infrastructure, but it’s that small office-type location that we think about very differently than we did ten years ago.
Realizing cost savings and performance improvement
Depending on which country the site is in, we see savings from 30% to 75% in infrastructure costs. This comes from being able to leverage more ubiquitous forms of connecting to the internet versus having dedicated lines and firewalls and routers and switches in those locations.
One of the advantages that we weren’t looking for, but that we gained, was performance. In the old world, every transaction was routed back through a central data center and then sent back out to the receiving system. In the new world, when we’re connecting via the internet, network performance goes up. So, wherever we’re using cloud-based solutions, we’ve seen performance improvements with a lot of our transactional applications, as high as 70% improvement in transaction times.
“Wherever we’re using cloud-based solutions, we’ve seen performance improvements.”
When you break down the barrier of everything having to sit in your own network, in your own data center, it becomes a lot easier to have conversations with customers, not just about how you do that, but also about making data flow in a secure fashion, so that from a point-to-point perspective the data is completely secure. Between two companies, we can actually share data in a more meaningful way than we did in the past. And so, I’d say that lower cost and improved performance were the two big things we experienced on this journey.