SECURE CLOUD TRANSFORMATION
THE CIO'S JOURNEY
By Richard Stiennon
Chapter 9
CIO Journeys
“It’s a new world. Don’t take all of your old approaches with you. You can’t assume that there’s any physical choke point that’s bringing everything back to a corporate enterprise perimeter and analyzing things. You have to understand what a true virtual, software-defined architecture looks like. When you understand that, you can understand how to move to the cloud securely.”
Jim Reavis, Co-founder & CEO, Cloud Security Alliance
From these innovators’ stories a common thread emerges: an early discovery of how the cloud provides distinct advantages, whether financial or in extended capabilities through greater IT agility, a faster pace of innovation, and lower costs. This is followed by the stage at which these organizations adopted a cloud-first strategy, meaning that cloud options are always evaluated first. That mindset opens up the world of cloud transformation and leads to competitive and financial gains for all of the IT visionaries in this book. Along the way, these organizations discovered that they needed a cloud-delivered security layer to accomplish their goals.
We’ve compiled three additional cloud transformation journeys in this chapter.
CIO Journey
Great-West Life
Accelerating the Financial Services Sector to the Cloud
Company: Great-West Life
Sector: Financial Services
Driver: Philip Armstrong
Role: CIO
Revenue: $30 billion
Employees: 24,000
Countries: 4
Company IT Footprint: GWL is the oldest insurance brand in Canada, and it has major subsidiaries in the United States, Ireland, the UK, and Germany. Of its 24,000 global employees, 3,200 are in information technology.
“And technology transformation does not happen in a vacuum. There are cultural and economic changes to consider.”
Philip Armstrong, Chief Information Officer, Great-West Life
Great-West Life (GWL) is a holding company of multiple insurance and financial management firms. Across all of its programs, it administers 1.2 trillion dollars in assets. Philip Armstrong is the Chief Information Officer at GWL. In the account that follows, he describes the company’s journey to the cloud.
In the words of Philip Armstrong:
Since joining GWL in 2016, my challenge has been to reinvigorate our brands in the face of changing technology and communications channels to our customers. Every day, we must help our clients use their benefits or their pension programs to realize their financial dreams.
As an established company, we have everything from 1970s-era mainframes, which can’t be beat for cost-effectiveness, to artificial intelligence (AI). So I have to ask: “How do we take that spread of technology and perform open heart surgery to improve it?” And technology transformation does not happen in a vacuum. There are cultural and economic changes to consider.
“How do we take that spread of technology and perform open heart surgery to improve it?”
I have been in technology since I left school. I have worked in 40 countries and have lived through every hype cycle. But despite the hype, cloud is impacting the way we all do business. If you are like me, you are moving parts of your business to the cloud.
At GWL, we are pursuing a hybrid model. We have five data centers and will continue to use them. We will use appropriate cloud providers. In some cases, where cloud does not make sense, workloads are moving back from the cloud. I think of the cloud as a fantastic tool for augmentation.
Modernizing through digital transformation
We are on a journey to refresh our brand. We are looking at robotics, process automation, and AI. We want to provide all of our services in the language our customers choose. We are transforming how people think about our business and how to plan for problems we don’t even know about yet.
In Canada, 50% of our workforce is made up of millennials. Without a doubt, they have a different risk tolerance. Consumers are becoming increasingly tech-savvy. Our entire business is changing, from our base infrastructure to our products and service channels—voice, text, websites, or other means (Alexa, Google Home). New ways to connect with our customers are popping up all over the place.
Customers do not care if they came in over one channel last week and another this week. They expect us to know about those transactions. People now expect a certain level of technology from their providers. At GWL, we have to meet those expectations.
We work with thousands of financial planners and independent agents. We have to support them with technology that is easy to use and does not interfere with their current business processes.
SaaS adoption and beyond
Like most companies, we have gone through stages of transformation. We rapidly transitioned to Salesforce for customer engagement, Concur for expense reporting, and SuccessFactors for human resources. These are discrete functions that are easy to move to the cloud without disrupting our core business.
But there are architectural patterns you need to know about. When the cloud first became popular, some of our business units were excited for all the wrong reasons.
“Another advantage is that large cloud providers have invested in security.”
From the adoption of these discrete applications, we have matured over the last two years. We started in the United States, moving workloads to Amazon’s East and West regions, as Amazon was slow to open data centers in Canada. Our organization made a significant foray into AWS, and we had big decisions to make about whether to invest in data centers and recovery centers or whether our users and needs were a better fit for the cloud. We have been on that journey for about two years.
Why has it taken so long? We wanted to do it right. We went through each of our applications. We hired specialized talent from Silicon Valley. We re-engineered to take advantage of cloud benefits like monitoring and elasticity. And we spent a lot of time getting ready.
We are looking at IaaS for end-of-the-month processing peaks, when we need extreme amounts of compute power. We have found that it is not true that cloud is always cheaper. One lesson learned was that moving to the cloud means you are transporting a lot more data. It’s cheap to bring it in, very expensive to pull it out. It can cost you a fortune in data transfer expense.
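As a rough, illustrative sketch of that egress math (the rates here are assumptions, not any provider’s actual price list), the asymmetry looks like this:

```python
# Illustrative estimate of cloud data-transfer costs. The per-GB rates are
# assumptions for illustration only; real provider pricing is tiered and
# varies by region and service.
INGRESS_RATE_PER_GB = 0.00   # inbound data is typically free
EGRESS_RATE_PER_GB = 0.09    # assumed outbound (egress) rate, USD per GB

def monthly_transfer_cost(ingress_gb: float, egress_gb: float) -> float:
    """Estimated monthly data-transfer bill in USD."""
    return ingress_gb * INGRESS_RATE_PER_GB + egress_gb * EGRESS_RATE_PER_GB

# Pushing 50 TB into the cloud is free; pulling 50 TB back out is not.
print(monthly_transfer_cost(ingress_gb=50_000, egress_gb=0))    # 0.0
print(monthly_transfer_cost(ingress_gb=0, egress_gb=50_000))    # 4500.0
```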
Our largest IT suppliers—Cisco, IBM, and Oracle—have been gradually pushing their preferred environment, the cloud. That has a financial impact: it shifts your IT budget from predictable, contracted costs to a subscription model where expenses are variable. The accounting department complains about how lumpy your spending is. It impacts financial planning. One way we addressed that is by investing in Apptio. It measures usage, so I can cost out the technology in my data center and keep a close eye on who is spending what on cloud resources. And yes, it is cloud-based.
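Setting Apptio aside, the underlying pattern of attributing cloud spend to its owners can be sketched in a few lines: export a billing report that carries resource tags and roll it up per team. The snippet below assumes a hypothetical CSV export with owner_tag and cost_usd columns; actual provider billing exports have richer schemas.

```python
# Minimal sketch: roll up cloud spend by owner tag from a billing export.
# Assumes a hypothetical CSV with "owner_tag" and "cost_usd" columns;
# real billing exports are far more detailed.
import csv
from collections import defaultdict

def spend_by_owner(csv_path: str) -> dict:
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["owner_tag"] or "untagged"] += float(row["cost_usd"])
    return dict(totals)

if __name__ == "__main__":
    for owner, total in sorted(spend_by_owner("billing_export.csv").items(),
                               key=lambda kv: kv[1], reverse=True):
        print(f"{owner:20s} ${total:,.2f}")
```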
Another advantage is that large cloud providers have invested in security. More often than not, breaches are an internal mistake rather than some flaw in the way cloud infrastructure is architected.
Building the workforce
The shift to the cloud is a big cultural and training disruption, and you have to go through massive education for internal users. It’s important to look at different departments and how people collaborate.
When it comes to moving a large organization, cloud transformation is 70% cultural and 30% technical.
Getting experienced cloud resources is difficult. Most companies are transitioning, are in the cloud, or have a hybrid. When you start to move to the cloud, you need developers that can develop applications in the cloud and you are paying a premium for toolkits, which are changing rapidly. If you are looking for someone with three to four years of in-depth cloud experience, then it is an arms race. You train them, and then they leave for a higher salary.
The other problem is the number of clouds. Spin up a public cloud instance; it drops down to AWS, then to my own data center. Between the clouds, you are actually going into the internet. You have to find people with multi-cloud experience to architect all of that.
You need financial people who can monitor usage, and DevOps people who can extend their knowledge into cloud. You also need cybersecurity people who need a whole host of skill sets. All of these skill sets are very expensive.
We have had a very deliberate strategy of partnering with large tech companies and leveraging good professional services arrangements with them.
Should internal applications move to the cloud?
There are two schools of thought on moving workloads to the cloud. The companies that are starting their cloud journey look at obvious workloads. They have progressed to: “I am going to understand the benefits and cherry pick my internal apps to move to the cloud.” Do they need elasticity to support a variable workload? Do they need the availability and easy access? Can they take advantage of the built-in security?
The other school of thought is that legacy applications are going to take a lot of money to move. Is there a hybrid approach that does not require complete rewrites? Keep in mind that if you do the hybrid approach, you will have some users going to the cloud and others who will be routed to your existing data centers. It is still early days in big organizations. Many are not ready to drop the VPN and authentication tokens. It’s getting there, though.
Now we are starting to hear about large companies partnering to drop stacks into your data center. Cisco and Microsoft have partnered to drop an Azure stack in the data center, if I want it in-house, for whatever reason. That allows me to virtually “run it in the cloud.”
I get exasperated when I hear CIOs say they are moving everything to the cloud. I have five mainframes. We view these as so cost effective, I doubt they will ever move to the cloud.
Securing it all
I am a believer in defense in depth, so I will have overlapping security capabilities. We have different types of detonation chambers in the cloud. We are using Zscaler for web traffic and Proofpoint for email. That filters out the less sophisticated threats, the everyday burden of a constant flood of attacks, and drastically reduces incident volumes.
We have very sophisticated appliances for application firewall defenses. As you drill your way into our data center, we use firewalls from multiple vendors. We are shifting to Microsoft Windows 10 and use Active Directory and Intune for mobile device management. We also have a privileged user management system for server access and track alerts in a SIEM (security information and event management) system. We have a large cybersecurity team, and we are under pressure to deliver on the promise of all this technology by ensuring that security is done right.
“I am a believer in defense in depth, so I will have overlapping security capabilities.”
Partners and financial advisors can be a problem. Some work directly for us, so we can control their desktops and monitor their activity. Then we have people who have a commercial agreement with us, but own their own infrastructure and hardware. What we try to do with them is give them tools that they can access securely via Zscaler Internet Access. You have to look at how they are accessing your applications, data stores, and tools before deciding how to protect those elements.
Complete independents can also sell our products. You have to ensure they come through routes you can secure. What we have found is you cannot stop everything, so you need these multiple levels of defense. You cannot monitor and measure everything, so you have to apply the more sophisticated technology, like AI, so your team can be freed up to focus on the important things.
The bad guys are starting to use AI to package their malware. They want to be able to bypass the sandbox technology everyone is deploying to catch their malware.
Zscaler saw the writing on the wall. The difference with Zscaler is they can inspect traffic inline. Detonation chambers have been around a long time, but they run in a virtual environment, and the sophisticated stuff detects the virtual environment and goes to sleep. Zscaler has built its own environment without the standard virtual machines, so the malware detonates and is detected.
One of the great things about working with cloud vendors is if I get infected by something, and I show the vendors what I have seen, they will learn from it. Then they implement protection in real time into the cloud.
An example of our security working is that we had no issues with WannaCry. We saw some attacks in North America and a couple in Europe. We had already patched for it.
What we are seeing is that we are in pretty good shape to screen out the run-of-the-mill stuff. It’s the very sophisticated stuff that we have to worry about. It gets past your first line of defense, lands in an inbox, and somebody clicks on a link. Either Zscaler gets the link and blocks it, or our endpoint solution sees the unusual behavior and the device is quarantined until it can be cleaned up.
For a lot of companies, the cloud has complicated things. How do you extend your security fabric to multiple clouds? It’s simple: just get a cloud cybersecurity service.
Before our transformation journey, we had a traditional 1970s hub-and-spoke design. Cisco helped us build a leaf-and-spine design—a fully meshed network between access switches and the backbone, many to many—using Cisco Unified Access Data Plane (UADP) switching ASICs. We spent all of 2017 building that. The design they helped us with is complete, and it is already implemented.
But we also recouped costs from all those remote offices that no longer needed the full stack of security appliances. It allowed us to invest in our future model.
Moving forward
It was rather hard to sell the transformation internally. I have been here two years. We were quite a traditional shop and we were happy building a moat around the data center and securing it. But when it comes to talking about different services, we could not build that ourselves, so we saw we needed to use public clouds. As we did research, people’s attitudes came around. The biggest hurdle to overcome was around security.
I spent three days in Redmond at Microsoft doing a deep dive on the Azure architecture. When I came back, my boss asked what I thought of the security of Azure. I told him, “They are more secure than we are.”
We had some savvy board members that encouraged me, and our first forays have been quite positive. We are proceeding with caution. We have an internal checklist for any app we plan to move to the cloud. If we believe there is a good business case, we will do it.
Large financial services are slow and steady. They are risk averse and heavily regulated. They are in the trust business. That comes with the responsibility to think very carefully about the fit of the cloud.
Lessons learned
The very first thing you have to do is take a temperature check of your internal culture. It is normal to be excited about moving things out to the cloud. Vendors will go directly to your internal people, bypassing any oversight you may have. If everyone says, “Yes, move everything to the cloud,” I would be equally worried. Ask yourself what the primary driver is. Is it security? Agility? Flexibility?
“You have to communicate as you go through the journey. Celebrate success. Admit mistakes.”
The ice that is under the water—that hidden infrastructure—is quite expensive. It is hard to find people with that knowledge of the architectural patterns.
What not to do
- Try to prevent your internal business users from going directly to cloud suppliers themselves. They can punch a hole in your cyber fabric. They can enter into contracts that leave a nasty cost surprise. They can leave critical digital assets lying around. You have to be the cloud broker.
- Avoid moving things to the cloud simply because you don’t like working with your internal IT people.
- If you move to the cloud and realize it was a mistake, acknowledge that and move it back.
CIO Journey
Fannie Mae
Transforming Critical Financial Infrastructure Behind the U.S. Economy
Company: Fannie Mae
Sector: Financial Services
Driver: Bruce Lee
Role: CIO
Revenue: $110 billion
Employees: 7,200
Countries: 1
Locations: 8
Company IT Footprint: Fannie Mae has 10,000 people and over 10,000 servers in one or two data centers. It connects to about 2,500 banks and institutions, and to about 40 market providers of data or other types of services. Fannie Mae manages petabytes of data and 400 different applications. The complexity of its systems and infrastructure is not on the order of an international bank, but it is vital—it’s systemically important.
“The point of this journey is to increase the resiliency of the company.”
Bruce Lee, Former Chief Information Officer, Fannie Mae
Fannie Mae is one of the major financial services organizations that underpins the economy of the United States. Bruce Lee, formerly the Chief Information Officer at Fannie Mae, describes the steps the organization took as it transformed its IT practices to a cloud model.
In the words of Bruce Lee:
When we began our digital transformation journey here at Fannie Mae, we took a long hard look at what kind of company we were. We are a Fortune top 25 company with a three trillion dollar balance sheet and 14 billion dollars in profit.
I have been an IT person my whole career. I started off creating trading applications for banks in London. We were the disruptors back then, using PCs instead of mainframes. From there, my career has followed the disruptions in the financial space; in the derivatives world of interest rates and then foreign exchange. I was at HSBC until about 2012, when I got an opportunity at the New York Stock Exchange, and I thought, “Well if you really believe in technology’s power to transform the market, we’re witnessing that with high-speed trading.” So I joined an industry in transition.
I came to Fannie Mae in 2014, when I saw that the mortgage market was transforming. The way that mortgages were created, serviced, and securitized was changing. The mortgage industry was probably the last of the financial services industries to get a real dose of technology transformation. In the past four years, we have been fundamentally rewriting the mortgage industry in the United States from our position as a secondary mortgage provider.
We have both the trading side and a B2B side to manage. The ecosystem is a big platform that does not look dissimilar to an Uber or Airbnb in that our job is to connect excess capacity—the world’s financial capital—to excess demand. We just had to take that platform and renovate it. That’s what we’ve been doing for the last few years.
At Fannie Mae, we are deemed part of the nation’s critical financial infrastructure because we move so much money, we connect so many things. We have an outsized commercial impact, yet we are not that large in terms of people and servers. We manage petabytes of data and 400 different applications.
When it comes to starting down this path of digital transformation, I don’t think CIOs spend enough time answering the questions: “Where are we today? Where do we want to be? And, how do we start the journey from here to there?”
When I joined, we had a lot of software development being done in the classical waterfall approach. We had five separate projects in which we were investing 100 million dollars a year, each. When you look at the track record of such large IT projects, you find there is a 96% failure rate, making them extraordinarily risky.
“I don’t think CIOs spend enough time answering the questions: ‘Where are we today? Where do we want to be? And, how do we start the journey from here to there?’”
We had a lot of departmental Sun boxes running Solaris, which meant a lot of application concentration onto individual servers. Most of our people were no longer programmers, but rather had become vendor managers because the development had been outsourced. We’d lost the core ability to engineer and architect. We’d become captive to our vendors in a very dysfunctional way.
That hard look was the start of our journey. While we found many areas we needed to improve, we found pockets of people who still have that imaginative view of what the future can be. I listened to them, as a new CIO must.
Defining a strategy
The main message was that we needed clarity of direction. We set out and made five bold statements about our IT strategy. One of them was that we would partner more closely with the business and make releases for structural applications every six months. This excited the agile team, but it scared the waterfall guys to death—but it also got everyone to a place in their heads where they said, “We’ve got to go faster.”
Another goal of the new strategy was that we were going to embrace the cloud where it made sense. Stating that helped overcome the objections of the traditional IT forces internally.
A third objective was to build a team internally that could power our digital transformation by being core to the business. That means acquiring the skills, bringing in talent, pushing our vendors and outsourcers further away from us, and internalizing more of the work. This is a typical arc you will see in most agile digital transformations: you have to own more of the people yourself and you have to be more self-sufficient in software engineering and design. We have that as a goal.
We put another goal first that we called “fixing the foundations.” That meant putting in place fundamental security and architectures that recognized how critical data was. At the end of the day, data is what matters most to us. Beyond cloud, beyond security, beyond everything. Who has that data? How accurate is it? And how can it be relied on? What intrinsic value does it have that a company like Fannie Mae can stand behind? These foundational improvements included partnering and agile delivery. It was adopting the cloud. It was sorting out the data and building the team to do it all.
Adopting SaaS
Moving to Salesforce led to interesting conversations with our business. The business wanted better Customer Relationship Management (CRM) tooling, but they were looking at it through what I call an old-world view. We pushed them to realize that they did not want a better tool for creating tickler notices to call a customer—what they really needed was a customer engagement tool. You want an environment where interested customers can find what they need on their own. You want to create a world that allows our own internal data view to intersect with the customer’s view of themselves.
These conversations occurred when we were executing on our objective of IT partnering with the business. We evaluated other tools but ultimately went with Salesforce for CRM. We adopted Salesforce and immediately learned a valuable lesson: resist the temptation to over-customize SaaS tools, to try to make them fit our old way of thinking. We even had a group that redesigned the way the Salesforce interface worked until the end users asked, “Why does it scroll back and forth instead of up and down?” We learned to abandon that stuff and work with what Salesforce delivered.
Migrating internal applications
As we embraced the cloud more generally, we had to look at how to effectively move our own applications from our data centers to AWS. One of the challenges people face is that they underestimate the grip of a long-held corporate tenet: that the infrastructure will be perfect, and applications can be written with that assumption. We will provide highly available clusters, automatic disk mirroring, monitoring, and redundancy at the hypervisor level. We’ll have transaction integrity maintained on the database backend, and the VMs will never go down. Because of that, application developers did not have to code resiliency into their applications.
In my experience, your approach to the cloud has to be that something can go wrong. VMs are easier to move around. They need to automatically recover. You have to worry about the state of your data and the multiple states it can be in. Basically, you have to think about all aspects of not having a perfect infrastructure. You can’t rely on speed, for instance, as it will vary. On the internal corporate network, you spent a lot of time tuning everything to make sure that a transaction would never take more than 100 milliseconds. Because of that, you could guarantee what throughput would look like, or know that two updates would complete close enough together that you would not have a data integrity problem.
With cloud, you can’t take any of that for granted. You have to program for what it is and for what happens if it slows down. The Intel updates for the Spectre and Meltdown bugs that Amazon rolled out are a great example. Everything slowed down, and you had to adjust for that.
The interplay between hardware and software is much more loosely coupled in the cloud. That’s what developers have to program for.
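Lee’s point about coding for imperfect infrastructure typically shows up in practice as explicit timeouts, retries, and idempotent operations. The sketch below is a generic illustration, not Fannie Mae’s implementation; the URL is a stand-in and the backoff parameters are arbitrary.

```python
# Generic sketch of coding for unreliable infrastructure: bound every call
# with a timeout and retry transient failures with exponential backoff.
# The URL is a placeholder, not a real service endpoint.
import random
import time
import urllib.request
from urllib.error import URLError

def with_retries(func, attempts=5, base_delay=0.5, max_delay=8.0):
    """Call func(), retrying transient errors with backoff plus jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except (URLError, TimeoutError):
            if attempt == attempts:
                raise  # out of retries; let the caller handle the failure
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay / 2))

def fetch_data():
    # Always set an explicit timeout; never assume the network is fast.
    with urllib.request.urlopen("https://example.com/", timeout=2) as resp:
        return resp.read()

data = with_retries(fetch_data)
```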
During 18 months of “test and learn,” we tried to take a lot of corporate standards and design principles into AWS, and it was a disaster. We had to regroup and implement a cloud-native model rather than try to duplicate what we had in the data center.
We had anticipated this learning curve when we started, but we still had to go through it so people’s hearts and minds would come over. They had to experience why it was difficult, why their paradigm doesn’t work, and why they had to learn a new one.
I think of cloud migration for applications in the following ways. Corporate applications like HR systems and payroll systems should go to the cloud. Get the things that are not core to your business into the cloud. It’s painful if it slows down, but it’s not a problem. Then you have the other end of the spectrum, which is highly variable compute. Lots of compute, lots of data, but highly variable loads. That’s another use case where you should definitely go to the cloud.
The challenge is the migration of your core transactional systems, your legacy Sun Solaris, Oracle transaction flows, the things that go up and down and are interlinked end to end across the whole company. There may be as many as 40 to 50 applications in a single business value chain. Decomposing that so the pieces can move to the cloud and be programmed in such a way that their variable performance does not impact your SLAs is the key. We are only now getting to see just how hard that is.
On top of creating a DevOps ecosystem, you have to figure out support. Your cloud infrastructure providers may not call you for two hours when they have a problem. In the meantime, you race around with your own troubleshooting only to discover the glitch was on their end.
Transforming the network and security infrastructures
When we started our cloud transformation, we performed a network hop analysis. Internally, an application would reach its data set by making several hops: it would go from the application stack through two global load balancers and down to the storage arrays to find its data. It was a two- or three-hop technical journey from the view of the execution memory to the data we needed. When we put the same data set in AWS, because it was going to be used for analytics, we discovered we had increased the number of network hops by a factor of three, to nine.
As you move applications to the cloud, you have to be aware of the network paths the data is going to take. You may leave your data where it is in the data center, but you have to be smart about all the hops it takes to get to it. You will invariably be adding layers and hops. You have to engineer that carefully. That may mean doubling down on the quality and depth of the networking team in your organization.
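One lightweight way to make those extra hops and their cost visible is simply to time the same data request along each path and compare medians before and after a migration. The sketch below uses placeholder URLs; in practice you would point it at your real data endpoints.

```python
# Rough sketch: compare round-trip latency to the same data set along two
# paths (the on-premises copy vs. the cloud copy). The URLs are
# placeholders for illustration only.
import statistics
import time
import urllib.request

def median_latency_ms(url: str, samples: int = 10) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read(1024)  # the first kilobyte is enough to measure the path
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

for name, url in [("on-prem path", "https://dc.example.com/dataset"),
                  ("cloud path", "https://cloud.example.com/dataset")]:
    print(f"{name}: {median_latency_ms(url):.1f} ms median")
```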
We embraced security as a journey that ran in parallel with our exploration of our applications, data, and transaction processing. We had phases of what we would allow along the journey and what we could support. And then we looked at what comes next, and what comes after that. We were fortunate in that the CISOs were of the mind to “make it work.” We did not experience that typical battle with the security folks. They took the time to learn the AWS security stack, learn the way it works, and in many cases implement what was needed. We have assets in AWS, but we still connect back to our data centers and then out through our security stack before getting to the internet. The next step of our evolution is to put that security stack in Amazon, as well, to allow direct connections from inside Amazon to other places.
Establishing local breakouts
We are on the verge of pulling the trigger to allow our 10,000 employees to go directly to the internet from wherever they are. We use Zscaler Internet Access (ZIA) for that. The driver is Office 365. When you make that shift to Office 365, while still backhauling everyone’s traffic to HQ, the bandwidth usage goes through the roof.
The point of this journey is to increase the resiliency of the company. Moving to Office 365 means that if we have issues with our systems, our employees can still get to email and SharePoint. Because of that, we went with Azure’s hosted Active Directory, removing one more thing that could fail. If people could not authenticate, they would not be able to get to Office 365.
Things to avoid
- Avoid saying it will be quick and easy to move to the cloud. It won’t be. Just tell people it is much harder than they think.
- Try to avoid contention between developers and infrastructure people. Developers tend to jump to the cloud due to impatience with the controls in place. They don’t want to wait for a server to be provisioned. They try to make the case that using the cloud is just easy.
We had to fight a lot of that at the beginning. It’s natural for the developers to want to avoid working within the constraints of IT, but it always comes back to haunt you. Eventually, they have to interact with you, the security people, the data team, and network people. The myth that cloud is what drives developer productivity falls apart when you try to run anything in production for real; and run it at sustained levels; and have monitoring; and make sure it has the right backups; and that the resiliency is in place and that you’ve tested it; and the network doesn’t get crowded out by something else.
To me, it means you are just shifting the pain to a different part of the organization: off the developers and onto their infrastructure colleagues. Developers, infrastructure, and security all have to be on the same page from the beginning.
There is no shortcut when you are building the hard stuff—the things that address real business and customer problems require integration across silos.
Things to do
- The main point is to realize that your cloud migration journey, like your digital migration journey, is going to be multifaceted. You should not think of it as one thing. You have your pure SaaS projects, your platforms like Salesforce and ServiceNow, and you have your office automation suite like Office 365 and SharePoint. You can progress on one thing separately from the others. Moving Exchange to the cloud is a lot easier than moving a mortgage underwriting system to the cloud. One took nine months to organize; the other is going to take five years to complete.
- Be precise in language usage. Infrastructure as a service, platform as a service, and software as a service are all very different, with different paths to success. Be especially careful with SharePoint. Customizations spread like wildfire and make it hard to migrate to the cloud. If I had to do it all over again, I would have killed off SharePoint internally first.
- The legal team has to adjust too. They have to understand that the nature of the vendor-customer relationship has changed. They comb through contracts looking to customize them to the company’s benefit. But cloud contracts are one size fits all. The provider cannot modify them for each customer. Which brings up an important insight I had.
Cloud introduces standards
A big “aha” moment for me was when I thought about the fact that we know intrinsically that systems—trade systems, cargo systems, any systems—all work better when you have standards. Think of railroads and standard gauges. We know standards are good.
In corporate life, though, standards have become associated with avoiding the negative—security standards to prevent you from doing something stupid; database standards so you don’t do anything stupid. They’re seen as governance hurdles that limit creativity. Standards are somehow burdensome and bad. The beauty of the cloud is that it has managed to make standards sort of sexy, make them good things, because they free developers from the whole infrastructure nightmare. If your own infrastructure team tried to impose a whole bunch of standards on developers, they would hate it.
“The beauty of the cloud is that it has managed to make standards sort of sexy, make them good things, because they free developers from the whole infrastructure nightmare.”
Developers are OK with just ticking boxes on AWS when they set up a VM. They don’t think of them as standards; they don’t fret over the fact that there are only three choices of configuration. They forget that they used to specify hundreds of different configurations on the corporate side, insisting that their application is special, it’s different; they need this, they need that. In the cloud, they are perfectly happy with limited choices and just tick which ones they want.
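The tick-box idea can be made concrete with a small sketch: a short menu of sanctioned configurations and a check that every requested VM falls within it. The shape names and sizes below are invented for illustration, not any provider’s actual catalog.

```python
# Sketch of "standards as tick boxes": only a few sanctioned VM shapes
# exist, and anything outside the menu is refused. Shapes are invented
# for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class VmShape:
    name: str
    vcpus: int
    memory_gb: int

STANDARD_SHAPES = {
    "small": VmShape("small", 2, 8),
    "medium": VmShape("medium", 4, 16),
    "large": VmShape("large", 8, 32),
}

def request_vm(shape_name: str) -> VmShape:
    """Developers pick from the standard menu; bespoke configs are rejected."""
    if shape_name not in STANDARD_SHAPES:
        raise ValueError(f"'{shape_name}' is not a standard shape; "
                         f"choose one of {sorted(STANDARD_SHAPES)}")
    return STANDARD_SHAPES[shape_name]

print(request_vm("medium"))  # VmShape(name='medium', vcpus=4, memory_gb=16)
```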
Somehow the cloud has managed to make standards acceptable. The cloud is not customized, it is not bespoke. It’s a very standard environment. I used to have 38 flavors of data replication in one data center at Fannie Mae; 38 ways that application teams had to decide their applications would move data from their primary system to their backup system. 38 of them. We had to close the data center to get down to ten ways, and now we have another big project underway to get that number down to two.
This move to standardization is a good thing for the industry.
CIO Journey
PulteGroup
Building a Home in the Cloud
Company: PulteGroup
Sector: Home Construction
Driver: Joe Drouin
Role: CIO
Revenue: $7.6 billion
Employees: 4,000
Countries: 1
Locations: 700
Company IT Footprint: Currently at Pulte, there are almost 5,000 employees. Pulte serves approximately 35 national markets across the United States, and has 600 to 700 active communities and 36 divisional offices around the country. Its IT footprint is around 5,000 desktop endpoints.
“If I were to advise a company in a similar position, I would say it helps to adopt a cloud-first mentality. Assume that everything is going to the cloud as a rule, and really challenge the exceptions. I don’t think it’s bold anymore; it’s what you have to do today, and in my mind, it’s proven.”
Joe Drouin, Chief Information Officer, PulteGroup
Another cloud transformation story is from Joe Drouin, who is currently the Chief Information Officer at PulteGroup. He has overseen transformations at three separate companies: TRW, Kelly Services, and most recently the PulteGroup. Pulte is one of the largest home construction companies in the United States.
In the words of Joe Drouin:
Evaluating our IT footprint
I’d been part of some fascinating IT transformations in the past, but this was certainly the most challenging. We had some legacy technology, systems, and applications that didn’t support the business anymore. I was able to lean on my prior transformational experiences at TRW and Kelly to do the same kind of thing at Pulte.
In 2015 we started focusing on what we could do around our now 12-year-old application footprint. By 2016, we were ready to hit the accelerator. We spent all of 2017 laying out the roadmaps and our investment plans, building a fundamentally new architecture—a very cloud-centered architecture—and getting everything lined up for when the flow of investment kicked back in. As we entered 2018, we built out the underlying foundation and new architecture: our “enterprise data hub,” a platform for integration that broke us out of our legacy environment of a point-to-point, 20-year-old, accidental architecture and into a more deliberate, modular, loosely coupled, API-centered one with a strong footprint in the cloud.
Pulte and the cloud
When I got to Pulte, everything was built on-premises. We had a data center in our office in Arizona. Almost everything was built or bought and housed in that data center. We had a traditional hub-and-spoke network with everything pointing back to that data center. Soon thereafter, we were running out of space, were at capacity, and had to add space and power and cooling. The cloud was tried-and-true for me, having had much success with cloud platforms and SaaS at Kelly Services, so we started moving to it in earnest.
Beginning our cloud journey with Office 365 and local internet breakouts
We rolled out Office 365 and got off on-premises Exchange. Early on, we started purchasing SaaS solutions and slowly but surely moved more and more of our footprint out of the data center and into the cloud, which meant that we had to change the traditional hub-and-spoke model of the network. That’s when we brought in Zscaler to help. I was familiar with Zscaler from my time at Kelly, and felt like Pulte was not a dissimilar model. We have lots of small locations that are constantly opening and closing.
“We started putting local internet into those locations, so we didn’t have to backhaul all our traffic to the data center in Arizona.”
We started putting local internet into those locations, so we didn’t have to backhaul all our traffic to the data center in Arizona. Zscaler provided us the ability to do that and put all the security provisions in place that we needed. We moved more toward a hybrid design—we still sometimes have pipes back to the data center, but every location has connectivity via a local internet provider. This helped give us flexibility, but importantly it also reduced the delays we often experienced waiting for business-class service to be brought out to residential areas in far-flung suburbs, where often getting direct circuits took ages.
A hybrid cloud environment
As more and more of our capabilities are hosted in the cloud, it is important to be able to route traffic locally where it needs to go and back to the data center when needed. We still host our legacy ERP in the data center. We’re currently deploying new and updated applications to Microsoft Azure. All our custom applications were built on .NET and SQL, so from the server OS to the database, all the way up through the development stack and to the desktop, we’re a Microsoft environment. As such, Azure was a natural place for us to focus.
Costs will go up short term
There is an education process for IT and the whole business as you move into the cloud. One maybe not-so-obvious thing is there is often not a direct cost savings. During the transition stage, we are putting things in the cloud and paying by the drink but at the same time, we can’t just turn the data center off. You can’t shut down enough equipment in the data center fast enough to offset the cost of moving. Ultimately, the economics of it will pay off, but for a time we’re carrying costs for our data center and we’re incurring new costs. The idea of pay-per-use in the cloud is a great one. The idea that you can turn the dial up and down sounds great, though in my experience the dial only seems to go up.
End state: flexibility
I see us three or four years from now with a much more flexible IT environment, one that sits mainly in Microsoft’s cloud but that would be containerized to the point that if we decided to spread the love a little and move some things out of Azure, it wouldn’t be a problem. We will have this modular, plug-and-play architecture that will give us tremendous flexibility. In this scenario, we will have applications that can be plucked out and replaced much more easily than trying to replace a big, monolithic, three-year software development project.
I think slowly but surely we will get to a point where there’s very little on-premises technology. At the point we are ready to entertain the notion of replacing our finance system, I would certainly be looking for a cloud-based system.
Chapter 9 Takeaways
As the three CIOs in this chapter have shared, moving to the cloud has enabled them to move faster than they had previously imagined. It has brought their organizations flexibility and agility, and in some cases has allowed them to try things on a small scale before committing heavy resources or prolonged timelines.
In summary, some key takeaways from these leaders are:
- Adopt a cloud-first mentality. Assume that everything is going to the cloud as a rule, and really challenge the exceptions. It’s not bold anymore; it’s what you have to do today, and it’s proven.
- Ensure that the entire company is committed to the cloud-first strategy. This can be done incrementally by carving out projects that result in early wins. It can also be an all-in effort where top management recognizes the advantage of moving quickly, often driven by competitive pressures but also by customer demands.
- Plan application migration early. Create a blueprint for lift-and-shift, partial refactoring, and full refactoring of every application.