SECURE CLOUD TRANSFORMATION
THE CIO'S JOURNEY
By Richard Stiennon
Chapter 2
Moving Applications to the Cloud
“For every dollar we spend in this ‘world of the cloud,’ we reuse what we already have, and we see a three- to four-dollar greater return. And that is why our evolution into the cloud has become so important to us.”
Jim Fowler, Chief Information Officer, General Electric
It Starts with Application Transformation
Cloud transformation is already underway at every organization, often without the knowledge of the IT staff. Cloud and mobility are changing how, where, and when we work. Today, platforms like LinkedIn for professional development, Workday for human resources, ServiceNow for customer support, NetSuite for ERP, and Slack, Yammer, and Skype for collaboration have become the business productivity tools of choice. And, of course, many of your users spend inordinate amounts of time on social media platforms like Twitter, Instagram, and Facebook. Meanwhile, streaming music and video sites drive up bandwidth consumption.
Transitioning to SaaS
The move to the cloud typically comes in stages. The first stage is invariably the use of critical business applications that are hosted in the cloud—software as a service (SaaS), the best example of which is Salesforce. The advantages of SaaS for customers are apparent: minimal capital outlay, no annual maintenance fees, consistent support, easy self-service, and continuous improvement as bugs are fixed and features added with zero friction. Open the application on a Monday morning and there could be dozens of new capabilities added since the previous week, all requiring no testing or upgrade cycles for the customer to schedule.
The most dramatic change impacting organizations today is the rapid move to Office 365, Microsoft’s SaaS offering for its office productivity applications. Gone are the days of managing Exchange servers with their database backends and high-availability redundancies. Storage with OneDrive and collaboration with SharePoint, all tied to one easy-to-manage Active Directory instance, complete the offering.
Moving Internal Applications to the Public Cloud
SaaS applications are just the first step in a cloud journey. What are organizations doing about internally developed applications? Most organizations maintain hundreds if not thousands of their own applications. This is the next phase of the cloud journey: moving those internal applications to a cloud environment, whether it is on the public cloud such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud, or in a private cloud in the corporate data center.
Cloud service providers are facilitating this transition by providing more tools and resources to help organizations re-host and re-factor their applications on their infrastructure through platform as a service (PaaS). PaaS adoption is growing rapidly, as reflected in recently reported public cloud revenue growth.2
Leveraging Private Cloud
While public cloud adoption is growing at a fast pace, some companies are choosing to keep their heavily regulated data in a private cloud due to stringent regulatory or government requirements. These options are becoming widely adopted, with vendors such as VMware offering both hosted and on-premises deployments. AWS GovCloud and Azure Government are examples of isolated, compliance-focused cloud environments built to help companies take advantage of the cloud while maintaining the security and compliance controls they previously enforced on-premises.3
Enabling Application Transformation
Whether IT leaders like it or not, the business starts to use these applications and services even before IT can have a say or impose controls. Often the complaint is that IT processes for incorporating new capabilities are too slow. A marketing team can launch a website, contract with a lead funnel management solution like Marketo, and plug into Salesforce without consulting the IT department. This leads to a plethora of data stores, multiple credentialing systems—each a privacy breach waiting to happen—and all without appropriate controls.
The same arguments are heard when employees branch out beyond consuming SaaS applications and start to build applications in the cloud. This so-called shadow IT is often attributed to rogue actors but, in reality, it is natural for a team to want the ease of deployment of spinning up compute resources on AWS or Azure without all of the constraints that the IT department imposes for specifying, purchasing, configuring, and maintaining standalone servers in the data center.
Protecting this access poses additional challenges because now the most critical data and services are being exposed. While a traditional VPN may provide the needed access controls, it is complicated and expensive to maintain and is often one of the top headaches of the IT security team.
Larry Biagini, former CTO of GE, declares that “It is not for IT to say ‘no,’ but to support the needs of the business.” And the users have spoken. They have initiated a move to the cloud that is transformative. The cloud is replacing the data center for application hosting and will eventually replace it for even the most critical transactions.
As we will see in the following chapters, every cloud journey includes a three-part question about what to do with applications, as outlined in Figure 2.2: Should I lift and shift, partially refactor, or completely refactor? Some applications may move to a hybrid model, in which the front end is hosted in the cloud while the backend transaction processing and the data remain in the data center. IT has to understand which applications are right to migrate to the cloud based on the following three approaches:
Lift and Shift. This approach is for those applications that can be easily moved to the cloud with no modifications. Anything that was already delivered via the web internally can be moved to the cloud. This could include the company directory, HR communications, and help-desk contact forms.
Partial Refactoring. In this case, some minor changes are required to move an application to the cloud. Perhaps the backend database has to be standardized or changed. Hardcoded destinations such as IP addresses should ideally be externalized as configuration (see the sketch after this list). The same goes for access controls that may be embedded in the application code; once the application moves to the cloud, these cannot risk being exposed.
Refactoring. Here, the organization decides to completely rewrite the application. It may call for a new development environment, new coding languages, and new ways to think about applications that can be exposed to partners, customers, and employees, no matter where they are in the world.
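To make the partial refactoring step concrete, here is a minimal sketch, in TypeScript, of externalizing a hardcoded destination and an embedded credential, the two changes called out above. The variable names, endpoint, and fallback value are illustrative assumptions, not drawn from any particular application.

```typescript
// Before (hardcoded): the destination is baked into the code and breaks
// the moment the application leaves the data center.
//   const BACKEND = "http://10.12.0.45:8080";

// After (variable): the destination comes from the environment, so the
// same build runs on-premises or in the cloud. BACKEND_URL and the
// fallback are illustrative names.
const BACKEND: string = process.env.BACKEND_URL ?? "http://localhost:8080";

// The same idea applies to access controls: read credentials from a
// secret store or the environment, never from application code.
const API_TOKEN: string | undefined = process.env.API_TOKEN;

async function fetchOrders(): Promise<unknown> {
  const response = await fetch(`${BACKEND}/api/orders`, {
    headers: API_TOKEN ? { Authorization: `Bearer ${API_TOKEN}` } : {},
  });
  if (!response.ok) throw new Error(`Backend returned ${response.status}`);
  return response.json();
}
```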
Cloud transformation requires a shift in thinking and often implies a change in the makeup of your IT staff. On the one hand, cloud transformation obviates much of the need for networking, system administration, and data center operations. On the other hand, development resources have to be re-trained to think about how applications are developed, delivered, and maintained as highlighted in Figure 2.3.
CDO Journey
Schneider Electric
SaaS as the catalyst for energy management leader’s cloud transformation journey
Company: Schneider Electric
Sector: Energy Management
Driver: Hervé Coureil
Role: Chief Digital Officer
Revenue: $32 billion
Employees: 144,000
Countries: 100+
Locations: 290
Company IT Footprint: Schneider Electric is a French multinational corporation that specializes in energy management and automation solutions, spanning hardware, software, and services. Schneider’s IT footprint spans over 100,000 connected users in 100 countries.
“Many companies look at “cloud-first” without assessing the network changes this entails. When we started to adopt cloud-delivered applications, we had to understand how our network architecture would be impacted by the cloud. There is a pretty significant network transformation required.”
Hervé Coureil, Chief Digital Officer, Schneider Electric
Schneider Electric is one of the largest industrial equipment manufacturers in the world. For Schneider, the move to the cloud was precipitated by SaaS: the company was an early Salesforce customer, and that deployment grew into a global initiative. Schneider characterizes digital transformation as an initiative that goes beyond technology to encompass its customer experience, user experience, and business in general.
Hervé Coureil, Chief Digital Officer of Schneider Electric, describes his organization’s cloud transformation journey.
In the words of Hervé Coureil:
At Schneider Electric, our cloud journey began with the move to Salesforce. It became a global initiative that succeeded, and we leveraged that success for everything that came after.
I have been with the company for quite a long time. I started in finance and did a lot of M&A work. When we acquired APC in 2007, I was sent there to drive the merger integration with the title CFO. It was an opportunity to see what happens after the M&A, instead of just orchestrating the deal.
During that time, I realized that information technology was on the critical path to drive business convergence and integration. I also developed a keen interest in security. Schneider at that time had started a program to integrate IT across all of its businesses. The company decided to invest in technology and created the position of global CIO, which fell to me. The CIOs from all of the countries would report to this position. Soon after that, digital transformation became a super-hot topic.
Digital transformation goes well beyond technology to our customer experience, our user experience, and business in general. Last year, we created a digital team that would help us in that digital transformation journey. We took into account sales support, automation, and other projects.
That’s why I moved to the role of Chief Digital Officer. It’s quite a large team, including a new global CIO who reports directly to me.
The three stages of cloud transformation
The cloud is an enabler from a number of perspectives. Our journey was not completely linear, but there were three distinct stages.
Stage 1: We started with software as a service. Schneider was an early customer of Salesforce. We saw SaaS as a way to enable our transformation. Leveraging SaaS also made a lot of sense for bringing together the organizations that came with our many acquisitions. One of the gains was speed of deployment.
Stage 2: We looked at the cloud as a way to transform our infrastructure. Transforming the network is required to take advantage of the cloud.
Stage 3: We used the cloud and the Internet of Things to provide new services to our customers. We could not do that without the cloud and mobility—the two mega-trends.
Wide usage of SaaS applications
We just finalized a major undertaking to move to Office 365 backed by Box for file sharing and storage.
It’s difficult to quantify how many sanctioned SaaS applications we have. I would estimate somewhere between 50 and 100. Counting the number of applications is a very common problem. We also took another look at our toolsets for monitoring the applications used in our network. Now we use Zscaler to monitor and notify us of application usage.
Our SaaS applications are segmented into three categories:
1. Internal applications that are connected to single sign-on and managed by us.
2. Applications that we might get alerts on—things like who is using them and how much.
3. Applications that we ban and block.
The migration of internal applications
We used to be a Lotus Notes shop. Over the years, we had developed thousands of custom applications for Lotus. One of the big things we are doing in migrating to Office 365 is moving as many of those Lotus functions as possible. At one point we had a governance issue: it was impossible to know how all those applications were being used and what data they touched, so we tried to retire any application that was no longer needed. We also looked at whether each remaining application could be served by something already in our landscape. Quite a few applications that had been developed on Lotus Notes would be better served by Salesforce, so we migrated them. Where no existing application fits, we develop natively in the cloud. Our partners are instrumental in making that happen.
We do a little of both internal and external application development. We rely on partners, but some applications are developed in-house. One of the big challenges is that many of our applications were deployed ten years ago, and the people who developed them are no longer with the organization. Some had been developed by citizen developers—people who were not even part of the IT organization. Very limited tribal knowledge remains for some of the things still in use. That meant we had to engage in a little digital archeology, reverse engineering the applications so we could re-develop them for the cloud.
Framework and controls to build the right applications for the cloud
We are aware that, without careful planning, moving to the cloud can pose new challenges. Our goal is to create local environments so people can develop workflows and simple applications. Rather than slow things down by banning these quick and effective developments, we want to create an environment that supports them.
On the one hand, we want to enable the development of applications, but at the same time, we do not want to create more technical debt. We strive for an empowerment framework. We want everyone to be able to build what they need. So we have two control points:
1. Go through the main portal to determine if we already have a suitable application. It’s a very simple process to search and discover apps. The internal customer should make sure the application was not developed somewhere else. In one case, we had a request come through and quickly determined that a team in Italy had already developed something that met the need.
2. Downstream, the second control is an internal privacy and security certification. We want to make sure that we are dotting the i’s and crossing the t’s when it comes to security. So we vet each application to ensure it does not introduce a privacy issue—perhaps by collecting data it shouldn’t—or open up a security issue.
While it is not written in stone, we have a high-level philosophy of all new applications being built for the cloud.
Our network transformation: MPLS to direct connection to cloud
Many companies look at “cloud-first” without assessing the network changes this entails. When we started to adopt cloud-delivered applications, we had to understand how our network architecture would be impacted by the cloud. There is a pretty significant network transformation required. First, we looked at the architecture: MPLS and the number of network access points versus direct connections to cloud providers.
The second thing that’s relevant from a network standpoint is the security of local internet breakouts from each office. That is where we invested in Zscaler.
We have more local breakouts than we used to have. Before the cloud, internet access was a second-class citizen. After the cloud, it becomes a critical element of our network usage.
We used to have firewalls and numerous other hardware appliances; now we have a cloud-first strategy, which Zscaler has enabled.
While local breakouts provide one benefit—cost savings—another has been quality of service. Schneider is a global company with over 100,000 connected users. Many countries don’t necessarily have the best local network infrastructure, and one of the things we were trying to achieve was good response time globally.
On top of that, we have a mobile strategy. We try to give people voice access to the network, and we enable BYOD (bring your own device) in every country where possible.
Classifying and protecting critical data
Our security strategy focuses on protecting the crown jewels, the most significant intellectual property in the company. This approach means that we have to be good at data classification. When identifying those crown jewels to protect, the natural tendency is to be super conservative; everything is a crown jewel. To instill discipline in the process, we designated one person whose role is to look at the identification of those crown jewels: our confidential information, sensitive IP, and of course, privacy data. We try to keep the crown jewel category very limited.
When we certify each internal application, we look at both security and privacy criteria. We have a Data Protection Officer running our EU General Data Protection Regulation (GDPR) program. When we certify a new application, we do a privacy assessment at the same time as security to ensure that we are only collecting the data we need, that we properly notify the end users when we collect it, and we take precautions to protect it.
Security: the key to cloud transformation
Security is an obvious priority. Without it, the rest of cloud transformation cannot happen. We have been thinking a lot about the security model and considering what cloud security approach we wanted to adopt.
Security is never over. Incident response is a big topic, as is network segmentation, network monitoring, and endpoint protection. We have eight or nine security initiatives currently.
While data loss prevention (DLP) is one of the things we looked at, we decided it is a very heavy burden to take on: there are so many ways to exfiltrate data. So, to begin, we have taken a very light approach: applications should be DLP-secure at the application level. Since we are inspecting traffic in both directions, it is a simple matter of looking for common things like personally identifiable information (PII) and setting an alert or blocking the transaction.
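Coureil describes pattern-matching inspected traffic for common PII and then alerting or blocking; as a rough illustration of that “light approach,” consider the following sketch. It is not Zscaler’s implementation; the patterns are deliberately simplistic, and production DLP engines add dictionaries, checksums (such as Luhn validation for card numbers), and contextual rules.

```typescript
// Simplistic example patterns for two common PII types.
const PII_PATTERNS: Record<string, RegExp> = {
  usSSN: /\b\d{3}-\d{2}-\d{4}\b/,        // U.S. Social Security number shape
  cardNumber: /\b(?:\d[ -]?){13,16}\b/,  // loose payment-card number shape
};

interface Verdict {
  action: "allow" | "alert" | "block";
  matched: string[];
}

// Inspect one payload; block or merely alert depending on policy.
function inspect(payload: string, blockOnMatch: boolean): Verdict {
  const matched = Object.keys(PII_PATTERNS).filter((name) =>
    PII_PATTERNS[name].test(payload)
  );
  if (matched.length === 0) return { action: "allow", matched };
  return { action: blockOnMatch ? "block" : "alert", matched };
}

console.log(inspect("form data: ssn=123-45-6789", false));
// -> { action: "alert", matched: ["usSSN"] }
```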
We’re using sandboxing technology to stay on top of advanced malware. We have deployed the Zscaler sandbox in the cloud to identify malware in files and internet sites our users may visit. We also believe that having a centralized identity management system is important to a successful cloud strategy. We use Active Directory and another product for single sign-on.
It is not a very original analogy, but you can feel very safe inside a castle and still not see the known unknowns on your extended perimeter. Our cloud strategy allows us to take a much more global approach.
Legacy approaches to security are complicated, requiring isolated mini-castles in every office. You have to replicate your headquarters security stack in every location. The cloud allows us to be much better at managing multiple sites in multiple countries with one control plane.
Challenges along the way
There was a bit of resistance to our cloud transformation, but it wasn’t massive. For us, the defining point was the global move to Salesforce, which was a great success that we managed to embrace relatively quickly. That gave us our first success story: we had deployed it faster than we would have deployed a traditional on-premises solution.
We had a couple of issues in our journey to the cloud. The main problem we had was quality of service in certain places. The experience in the United States with Salesforce was not replicated everywhere, and we learned the hard way that some countries do not have the best infrastructure. We had to rethink the network globally.
While measuring results is important, we are not looking at one golden metric that will summarize all the good things we achieved through cloud transformation. But every time we launch a project, we do monitor its success. When we deployed the custom application environment, we looked at how we are modernizing the application base. For every application we are decommissioning, we have a gain we can chalk up. Cloud is now so pervasive in everything we do that we look at the metrics of utilization.
How is internet usage growing? Since our initial move to Salesforce, we have seen cloud usage grow steadily. That first step started us on this journey.
What not to do:
- I would advise against going too broad. Don’t try to boil the ocean and do everything at one time. Deploy a pilot, have a success story, and build on that.
- Do not ignore the network implications. Look at the network architecture at the beginning of the cloud adoption. Try to get ahead of the problems.
What to do:
- Cloud is a means to an end. You want to create customer and business value. Cloud enables machine learning, which enables voice. Voice has its own benefits that lead to collaboration opportunities.
- Cloud allows you to connect sites together more quickly. Just point all the users at the apps.
- When talking to my peers in the industry, a lot of the conversations I have revolve around big questions: What’s next? What is the next wave? How do we prepare for new trends? We are forward-looking, while keeping security concerns front and center.
Architect Journey
FrieslandCampina
An International Dairy Conglomerate’s Network and Security Transformation Journey
Company: FrieslandCampina
Sector: Dairy
Driver: Erik Klein
Role: Infrastructure Architect
Revenue: $14 billion
Employees: 20,000
Countries: 34
Locations: 120
Company IT Footprint: FrieslandCampina has 120 locations. It employs over 20,000 people worldwide but, because those people work in shifts, the number of endpoints is not tied to the number of employees. Within the office and industrial workspace, there are about 7,000 to 8,000 endpoints. The company also operates more than 80 factories worldwide.
“I’m looking towards making the network totally irrelevant in the next five to seven years. The network will only be a transport mechanism that makes sure the application goes from A to B, but the data security itself is completely embedded in the communication stream.”
Erik Klein, Infrastructure Architect, FrieslandCampina
FrieslandCampina is a global producer of milk products and has created a sophisticated network to provide consistent and secure global connectivity. Erik Klein, lead infrastructure architect, tells its network and security transformation story.
In the words of Erik Klein:
Bringing milk products to the world
I joined FrieslandCampina in 2012. We are a global producer of milk products—we make cheese, infant and toddler nutrition, yogurts, skimmed and semi-skimmed milk, condensed milk, and health foods for athletes. We are based in the Netherlands, and over time we have expanded into other countries, such as Indonesia, Vietnam, Nigeria, Ghana, the U.S., and many others.
IT has an important role in manufacturing goods. From a production perspective, the availability of the operational technology (OT) environment (which is IT within the production environment, with specific requirements) has a huge impact. For example, raw milk can’t be stored for more than seventy-two hours. Any longer and it gets discarded, but it can’t simply be poured into a sewer, so disposing of the spoiled product is a costly process. OT is therefore used within the production environment to make sure that production processes aren’t disrupted and run within those strict timeframes.
Within the OT environment, with the introduction of next-generation PLCs (programmable logic controllers), smart sensors, and other IoT developments, the number of IP-based endpoints will grow considerably over time.
Currently, we have about 80 factories worldwide. Some are more traditional, but some are really sophisticated, and the Smart Factory is emerging. The number of endpoints in the OT environment will therefore keep growing.
Our cloud transformation
By the end of 2013, the cloud hype cycle started and there were more and more people looking at software as a service, localized content, and moving stuff to the cloud—in our case, Amazon Web Services.
Eventually, we realized that, in going in that direction, the wide area network we had was no longer viable. We needed to transform from a private, MPLS-centered network to a public, internet-centric network.
At that time, the designs we made consisted of several boxes on location, and we realized that this would be too complicated and expensive to execute. So in 2014, we embarked on the transformation journey by moving the centralized proxy server to the cloud with the Zscaler cloud service, while still relying on the capabilities of Cisco routers for all other functions.
As cybersecurity became more of an issue, the Zscaler Cloud Firewall came into play. Moving security to the cloud was harder because I had some internal pushback, and there were some reorganization issues. But in 2016, we started a project to extend the boundary of our network from a stateful firewall on a Cisco router at each FrieslandCampina location to cloud security, the Zscaler Cloud Firewall.
From every location, we then built IPsec tunnels to the Zscaler security service and used both the proxy and the firewall functionality of Zscaler.
To overcome the limitations of using PAC (proxy auto-configuration) files in the browser to get to the internet, we also changed the routing within the whole LAN environment so that the default route from every location would end up at the Zscaler security layer. And that’s where we are today.
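For readers unfamiliar with PAC files: a PAC file is a small piece of plain JavaScript that the browser evaluates for every request, which is precisely the limitation Klein mentions, since only browser traffic consults it. The sketch below is written as typed TypeScript for consistency with this book’s other examples (a real PAC file is the same code without the annotations), and the hostnames and proxy address are illustrative.

```typescript
// dnsDomainIs is a built-in helper provided by the browser's PAC engine.
declare function dnsDomainIs(host: string, domain: string): boolean;

// The browser calls FindProxyForURL for every request it makes. Note
// the limitation: non-browser applications never consult this file.
function FindProxyForURL(url: string, host: string): string {
  // Keep internal hosts direct (illustrative internal domain).
  if (dnsDomainIs(host, ".corp.example.com")) {
    return "DIRECT";
  }
  // Send everything else to the cloud security proxy, with a direct
  // connection as the last-resort fallback.
  return "PROXY gateway.cloudproxy.example:80; DIRECT";
}
```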
And then it was time for testing dynamic application routing.
Why secure internet local breakouts?
There are two reasons why we switched from a centralized proxy environment to a cloud-based proxy environment with local breakouts. First, from a marketing perspective, the driver to break out locally was to get localized content. The web servers you connect to from each country should automatically serve the website’s content in the local language, for example.
Second, FrieslandCampina had been using a number of different SaaS applications worldwide, so having it all break out centrally was, from a performance perspective, not a way forward. Also, web content became richer and files were getting bigger, so there was more data to transport. Given the localized-content requirement and the growing use of SaaS applications, we realized that we needed to bring the end user to the internet (cloud) more quickly.
Except for our private cloud—a direct-connected VPC (virtual private cloud) on AWS, which is connected to our MPLS backbone and so still traverses an MPLS link—everything else is offloaded at the local site and travels to the closest Zscaler data center based on lowest latency, with the second-closest Zscaler data center as backup. Every six months we measure whether those Zscaler nodes are indeed still the quickest to reach.
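The text does not say how that periodic measurement is performed, but a minimal sketch of the idea is to time a lightweight request to each candidate node, then take the fastest as primary and the runner-up as backup. The node URLs and the HTTP HEAD probe below are assumptions.

```typescript
// Candidate gateway nodes (hypothetical URLs).
const candidates = [
  "https://node-ams.gateway.example",
  "https://node-fra.gateway.example",
  "https://node-lon.gateway.example",
];

// Round-trip time for one lightweight request (Node 18+ global fetch).
async function roundTripMs(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" });
  return performance.now() - start;
}

// Rank nodes by latency: lowest becomes primary, next becomes backup.
async function pickPrimaryAndBackup(): Promise<void> {
  const timed = await Promise.all(
    candidates.map(async (url) => ({ url, ms: await roundTripMs(url) }))
  );
  timed.sort((a, b) => a.ms - b.ms);
  console.log("primary:", timed[0].url, "backup:", timed[1].url);
}

pickPrimaryAndBackup();
```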
Moving applications to AWS
FrieslandCampina is currently migrating applications to AWS based on several criteria. First, some applications only need to be accessed at certain times; our existing hosting provider can’t offer that kind of service, and keeping those servers at its location would be too expensive. Second, T-Systems couldn’t always meet the requirements of the applications, resulting in instances that were too big (too expensive) or too small (poor performance). Third, AWS gives us the flexibility to temporarily scale up and down when required. With the capabilities of AWS, we can tailor to the actual requirements of the applications.
Last but not least, since not all applications are 24/7, we could use AWS elasticity to turn them off on the weekends, saving money in the process.
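The interview does not describe the mechanism behind that weekend shutdown, but as one plausible sketch, a scheduled job could stop and restart non-24/7 instances through the AWS SDK. The instance IDs, region, and schedule below are hypothetical.

```typescript
import {
  EC2Client,
  StartInstancesCommand,
  StopInstancesCommand,
} from "@aws-sdk/client-ec2";

// Hypothetical IDs; in practice these might be discovered with a
// DescribeInstances call filtered on a "schedule" tag.
const weekendIdle = ["i-0123456789abcdef0", "i-0fedcba9876543210"];
const ec2 = new EC2Client({ region: "eu-west-1" });

// Run from a Friday-evening scheduled job.
export async function stopForWeekend(): Promise<void> {
  await ec2.send(new StopInstancesCommand({ InstanceIds: weekendIdle }));
}

// Run from a Monday-morning scheduled job.
export async function startForWeek(): Promise<void> {
  await ec2.send(new StartInstancesCommand({ InstanceIds: weekendIdle }));
}
```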
Improving SaaS access
In the early days, when we moved to the cloud proxy, we had our share of difficulties with the performance of Office 365. We really struggled to get that working correctly, but we have good performance now.
The 2016 phase of the network transformation went very quickly and was completely non-disruptive—people didn’t even know we moved security to Zscaler. Nobody really noticed that we went from centralized to decentralized, except that some of the applications became quicker.
Also, Zscaler was very quick in communicating what it was doing about cybersecurity threats such as WannaCry and NotPetya. They were quicker to communicate the impact within their environment than other partners were. They really did a good job on that one.
Deploying SD-WAN
Right now, we are moving towards a full SD-WAN (software-defined wide area network). Our strategy involves connecting five FrieslandCampina locations to the SD-WAN environment and giving that environment an NNI (network-to-network interface) to our existing Verizon network. With a full-blown SD-WAN deployment, our redundancy plan includes redundant internet lines and universal customer premises equipment.
For locations that use applications requiring MPLS services, a fit-for-purpose MPLS line that is smaller than our legacy MPLS circuit is supplied.
Historically, for every location except those with call center functionality, the MPLS line was approximately 5 megabits per second, while the internet lines are a lot bigger. We also have failover from MPLS to the internet, and the two internet lines back each other up. Our goal is to guarantee an experience level agreement (XLA) at the application level, rather than a service level agreement (SLA) based on availability and time to repair. We are aiming for predictable behavior and end-user experience in an application-based context (device, location, connectivity).
The SD-WAN has what are called universal CPEs at each location, and those universal CPEs will run network function virtualization. Each is really a compute-and-storage device with a hypervisor that runs the virtual services required by the SD-WAN service itself or by application acceleration. Other virtual network functions can be added if and when required; there will be a growing number of virtual network functions that we can deploy on those devices.
Picking an SD-WAN partner
In 2017, we initiated an RFI for, amongst other services, a new WAN service.
The vendors invited to the RFI were only given business requirements and we asked them to really innovate with a disruptive approach. We selected eight vendors to enter the RFP phase, and we started eliminating vendors based on their offering and presentation of the solution. In the end, three vendors were selected to give us their best and final offer, namely NTT, Interoute (both proposing the Silver Peak SD-WAN solution), and Verizon (proposing a combination of Viptela and Riverbed).
As part of the RFP process, we asked each vendor to present a reference customer where they had already deployed the proposed solution. And based on discussions with those customers, we made the final selection. The vendor testimonials were very important to us in the final phase of the RFP process.
Things to consider
- Do not invest in a traditional network. Don’t make any investments in your existing MPLS-with-internet-backup network; that’s old school. Just make sure that you know how your traffic is routed—so where your end users are and where your applications are—and create a network where, based on the application, the quickest, most efficient route is taken. In the end, users are not interested in technology; they only care that the applications they work with day to day perform well and perform consistently. If an application has a 2.5-second response time throughout all of Asia, nobody complains. But if one country has a response time of one second and another of four, they start talking to each other and start complaining.
- People are traveling more and working outside of the office, and those people are diverse. Currently, we bring roaming people back into our network via two central remote access (VPN concentrator) locations. With tools like the Zscaler App, we are looking at alternatives for connecting roaming users to internal applications.
- In the end, if you have your software-defined wide area network, local area network, and software-defined data centers, you need an orchestrator of orchestrators above them to make sure the policies you set on an application, or at a higher level, flow down to the LAN, the WAN, and the data center. The next step is to invest in securing the session between consumer and application.
- I’m looking towards making the network totally irrelevant in the next five to seven years. The network will only be a transport mechanism that makes sure the application goes from A to B, but the data security itself is completely embedded in the communication stream. For example, based on the identity of the client and the identity of the application, a secure communication channel will be set up between them. That will be my next focus, in roughly the 2025–28 timeframe. It could happen sooner, but developments within our company need a business case for change, funding, and so on. It is not only the development of the technology driving this, but also adoption and the willingness to spend money in new areas.
Chapter 2 Takeaways
The cloud is the new corporate data center. As applications migrate to cloud infrastructure, this invariably creates opportunities and challenges for CIOs, CTOs, and CISOs. To ensure a successful migration, it’s vital that the business objectives are clearly defined, stakeholders identified, and strategy and priorities established and widely communicated.
Key considerations for embarking on your application transformation journey:
- List and prioritize your applications
- Consider data security and risk on a per-application basis—evaluate and classify each application, rank the cost and impact to the organization, and determine mission criticality and productivity impact
- List top business and technical goals for each application—compliance requirements, user experience, reliability, performance, licensing costs
- Analyze each application to determine which can be migrated quickly to the cloud and which would require more transformation effort—its architecture, where it is hosted, whether it is a shared service, and the platforms and programming languages used
- Engage, inform, and train key stakeholders in the process
In the next chapter, we’ll discuss how application transformation drives network transformation, and how this next stage is pivotal in enabling digital transformation for the enterprise.