A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud

The next BriefingsDirect cloud adoption best practices discussion focuses on some of the strictest security and performance requirements that are newly being met for an innovative global financial services deployment.

We’ll now explore how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe. Due to the need for localized data storage, compliance with privacy regulations, and lightning-fast transaction speeds, this extreme cloud-use formula pushes the boundaries — and possibilities — of hybrid cloud solutions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we hear from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure. Please welcome Paolo Pelizzoli, Executive Vice President and Chief Operating Officer at Realtime Payments International for Mastercard, and Robert Christiansen, Vice President and Cloud Strategist at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening with cloud adoption that newly satisfies such major concerns as strict security, localized data, and top-rate performance? Robert, what’s allowing for a new leading edge when it comes to the public clouds’ use?

Christiansen: A number of new use cases have been made public. Front runners like Capital One [Financial Corp.] and some other organizations have taken core applications that would otherwise be considered sacred and are moving them to cloud platforms. Those moves have become more and more evident and visible. The Capital One CIO, Robert Alexander, has been very vocal about that.

So now others have followed suit. And the US federal government regulators have been much more accepting around the audit controls. We are seeing a lot more governance and automation happening as well. A number of the business control objectives – from security to the actual technologies to the implementations — are becoming more accepted practices today for cloud deployment.

So, by default, folks like Paolo at Mastercard are considering the new solutions that could give them a competitive edge. We are just seeing a lot more acceptance of cloud models over the last 18 months.

Gardner: Paolo, is increased adoption a matter of gaining more confidence in cloud, or are there proof points you look for that open the gates for more cloud adoption?

Compliance challenges cloud 

Pelizzoli: As we see what’s happening in the world around nationalism, the on-the-soil [data sovereignty] requirements have become much more prevalent. It will continue, so we need the ability to reach those countries, deploy quickly, and allow data persistence to occur there.

The adoption side of it is a double-edged sword. I think everybody wants to get there, and everybody intuitively knows that they can get there. But there are a lot of controls around privacy, as well as compliance with SOX and SOC 1 reporting, and everything else that needs to be adjusted to take the cloud into account. And if the cloud is rerouting traffic because one zone goes down and it flips to another zone, is that still within the same borders, is it still compliant, and can you prove that?

So while technologically this all can be done, from a compliance perspective there are still a lot of different boxes left to check before someone can allow payments data to flow actively across the cloud — because that’s really the panacea.

Gardner: We have often seen a lag between what technology is capable of and what regulations, standards, and best practices allow. Are we beginning to see a compression of that lag? Are regulators, in effect, catching up to what the technology is capable of?

Pelizzoli: The technology is still way out in the front. The regulators have a lot on their plates. We can start moving as long as we adhere to all the regulations, but the regulations between countries and within some countries will continue to have a lagging effect. That being said, you are beginning to see governments understand how sanctions occur and they want their own networks within their own borders.

Those are the types of things that require a full-fledged payments network, one that predated the public Internet, to begin to gain certain new features, functions, and capabilities. We are now basically having to redo that payments-grade network.

Gardner: Robert, the technology is highly capable. We have a major player like Mastercard interested in solving their new globalization requirements using cloud. What can help close the adoption gap? Does hybrid cloud help solve the log-jam?

Christiansen: The regionalization issues are upfront, if not the number-one requirement, as Paolo has been talking about. I think about South Korea. We just had a meeting with the largest banking folks there. They are planning now for their adoption of public cloud, whether it’s Microsoft Azure, Amazon Web Services (AWS), or Google Cloud. But the laws are just now making it available.

Prior to January 1, 2019, the laws prohibited public cloud use for financial services companies, so things are changing. There is a lot of that kind of thing going on around the globe. The strategy seems to be very focused on making the compute, network, and storage localized and regionalized. And that’s going to require technology grounding in some sort of connectivity across on-premises and public, while still putting the proper security in place.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

So, you may see more use of things like OpenShift or Pivotal’s Cloud Foundry platform and some overlay that allows folks to take advantage of that, so that you can push down an appliance, like a piece of equipment, into a specific territory.

I’m not certain as to the cost that you incur as a result of adding such an additional local layer. But from a rollout perspective, this is an upfront conversation. Most financial organizations that globalize want to be able to develop and deploy in one way while also having regional, localized on-premises services. And they want it to get done as if in a public cloud. That is happening in a multiple number of regions.

Gardner: Paolo, please tell us more about International Realtime Payments. Are you set up specifically to solve this type of regional-global deployment problem, or is there a larger mandate? What’s the reason for this organization?

Hybrid help from data center to the edge

Pelizzoli: Mastercard made an acquisition a number of years ago of Vocalink. Vocalink did real-time secure interbank funds transfer, and linkage to the automated clearing house (ACH) mechanism for the United Kingdom (UK), including the BACS and LINK extensions to facilitate payments across the banking system. Because it’s nationally critical infrastructure, and it’s bank-to-bank secure funds transfer with liquidity checks in place, we have extended the capabilities. We can go through and perform the same nationally critical functions for other governments in other countries.

Vocalink has now been integrated into Mastercard, and Realtime Payments will extend the overall reach, to include the debit, credit, loyalty, and gift “rails” that Mastercard has been traditionally known for.

I absolutely agree that you want to develop one way and then be able to deploy to multiple locations. As hybrid cloud has arrived, with the advent of Microsoft Azure Stack and more recently AWS’s Outposts, it gives you the cloud inside of your data center with the same capabilities, the same consoles, and the same scripting and automation, et cetera.

As we see those mechanisms become richer and more robust, we will go through and be deploying that approach to any and all of our resources — even being embedded at the edge within a point of sale (POS) device.

As we examine the different requirements from government regulations, it really comes down to managing personally identifiable information.

So, if you can secure the transaction information, by abstracting out all the other stuff and doing some interesting cryptography that only those governments know about, the [transaction] flow will still go through [the cloud] but the data will still be there, at the edge, and on the device or appliance.
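
As an illustrative aside, here is a minimal sketch of the kind of edge-side abstraction Pelizzoli describes: the real account number never leaves the device, and only an opaque token plus non-identifying fields flow onward through the cloud. The function names and the keyed-hash approach are assumptions for illustration only; production tokenization uses vaults or format-preserving encryption, and nothing here reflects Mastercard's actual cryptography.

```python
import hashlib
import hmac
import secrets

# Hypothetical edge-side key; in practice it would be provisioned securely
# per device or per jurisdiction, never generated ad hoc like this.
EDGE_SECRET = secrets.token_bytes(32)

def tokenize_pan(pan: str) -> str:
    """Derive an opaque, non-reversible token from the card number (PAN)."""
    return hmac.new(EDGE_SECRET, pan.encode(), hashlib.sha256).hexdigest()

def build_cloud_payload(pan: str, amount_minor: int, currency: str) -> dict:
    """Keep the PAN at the edge; forward only what routing and clearing need."""
    return {
        "pan_token": tokenize_pan(pan),  # opaque token, not the card number
        "amount_minor": amount_minor,    # e.g. 1999 means 19.99
        "currency": currency,
    }

if __name__ == "__main__":
    print(build_cloud_payload("5555555555554444", 1999, "GBP"))
```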

We already provide for detection and other value-added services for the assurance of the banks, all the way down to the consumers, to protect them. As we start going through and seeing globalization — but also the regionalization due to regulation — it will be interesting to uncover fraudulent activity. We already have unique insights into that.

No more noisy neighbors

Christiansen: Getting back to the hybrid strategy, AWS Outposts and Azure Stack have created the opportunity for such globalization at speed. Someone can plug in a network and power cable and get a public cloud-like experience, yet it’s on an on-premises device. That opens a significant number of doors.

You eliminate multi-tenancy issues, for example, which are a huge obstacle when it comes to compliance. Multi-tenancy also forces you to address “noisy neighbor” issues, performance problems, failovers, and the like.

If you’re able to simply deploy a cloud appliance that is self-aware, you have a whole other trajectory toward use of the cloud technology. I am actively encouraged to see what Microsoft and Amazon can do to press that further. I just wanted to tag that onto what Paolo was talking about.

Pelizzoli: Right, and these self-contained deployments can use Kubernetes. In that way, everything that’s required to go through and run autonomously — even the software-defined networks (SDNs) – can be deployed via containers. It actually knows where its point of persistence needs to be, for data sovereignty compliance, regardless of where it actually ends up being deployed.
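
As a rough sketch of what "knowing its point of persistence" can look like in Kubernetes terms, the Python snippet below builds a deployment manifest whose pods may only be scheduled onto nodes in one region, using node affinity on the standard topology.kubernetes.io/region label. The workload name, image, and region value are hypothetical, and real data-sovereignty controls would also have to cover storage classes, backups, and network egress.

```python
import json

REGION = "europe-west2"  # illustrative sovereignty boundary for this workload

# Hypothetical manifest: a stateful payments component pinned to one region
# so that its persisted data never lands outside the required borders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "rtp-ledger", "namespace": "payments"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "rtp-ledger"}},
        "template": {
            "metadata": {"labels": {"app": "rtp-ledger"}},
            "spec": {
                "affinity": {
                    "nodeAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": {
                            "nodeSelectorTerms": [{
                                "matchExpressions": [{
                                    "key": "topology.kubernetes.io/region",
                                    "operator": "In",
                                    "values": [REGION],
                                }]
                            }]
                        }
                    }
                },
                "containers": [{"name": "ledger", "image": "example/rtp-ledger:1.0"}],
            },
        },
    },
}

if __name__ == "__main__":
    # In practice this would be applied with kubectl or a Kubernetes client;
    # printing it keeps the residency constraint visible in one place.
    print(json.dumps(deployment, indent=2))
```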

This comes back to an earlier comment about the technology being quite far ahead. It is still maturing. I don’t think it is fully mature to everybody’s liking yet. But there are some very, very encouraging steps.

As long as we go in with our eyes wide open, there are certain things that will allow us to go through and use those technologies. We still have some legacy stuff pinned to bare-metal hardware. But as things start behaving in a hybrid cloud fashion as we’re describing, and once we get all the security and guidelines set up, we can migrate off of those legacy systems at an accelerated pace.

Gardner: It seems to me that Realtime Payments International could be a bellwether use case for such global hybrid cloud adoption. What then are the checkboxes you need to sign off on in order to be able to use cloud to solve your problems?

Perpetual personal data protection

Pelizzoli: I can’t give you all the criteria, but the persistence layer needs to be highly encrypted. The transports need to be highly encrypted. Every time anything is persisted, it has to go through a regulatory set of checks, just to make sure that it’s allowed to do what it’s being asked to do. We need a lot of cleanliness in the way metrics are captured so that you can’t use a metric to get back to a person.
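
A toy sketch of the controls just listed, using invented field names and policies: every write is checked against a residency policy before it is persisted, the payload is encrypted at rest (here with the cryptography library's Fernet recipe), and the emitted metric carries only a pseudonymized bucket rather than anything that leads back to a person.

```python
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative policy: which country's data may be persisted in which region.
ALLOWED_REGION = {"GB": "uk-south", "SG": "ap-singapore"}

cipher = Fernet(Fernet.generate_key())  # in practice: an HSM- or KMS-managed key

class ResidencyViolation(Exception):
    pass

def persist(record: dict, target_region: str, store: dict) -> None:
    """Encrypt and store a record only if the target region satisfies policy."""
    if ALLOWED_REGION.get(record["country"]) != target_region:
        raise ResidencyViolation(
            f"{record['country']} data may not be persisted in {target_region}")
    store[record["txn_id"]] = cipher.encrypt(repr(record).encode())

def emit_metric(record: dict) -> dict:
    """Report operational data without a direct link back to the person."""
    return {
        "txn_bucket": hashlib.sha256(record["txn_id"].encode()).hexdigest()[:8],
        "amount_minor": record["amount_minor"],
        "country": record["country"],
    }

if __name__ == "__main__":
    db, txn = {}, {"txn_id": "abc-123", "country": "GB", "amount_minor": 2500}
    persist(txn, "uk-south", db)            # allowed by policy
    print(emit_metric(txn))
    try:
        persist(txn, "ap-singapore", db)    # blocked by the residency check
    except ResidencyViolation as err:
        print("blocked:", err)
```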

If nothing else, we have learned a lot from the recent [data intrusion] announcements by Facebook, Marriott, and others. The data is quite prevalent out there. And payments data, just like your hospital data, is the most personal.

As we start figuring out the nuances of regulation around an individual service, it must be externalized. We have to be able to literally inject solutions to regulatory requirements – and not by coding it. We can’t be creating any payments that are ambiguous.

That’s why we are starting to see a lot of effort going into how artificial intelligence (AI) can help. AI could check services and configurations to test for every possibility so that there isn’t a “hole” that somebody can go through with a certain amount of credentials.
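
Automated configuration checking can start well short of AI: evaluate every service configuration against a set of policy rules and fail the deployment if any rule is broken. The rules and configuration fields below are invented for illustration; a real system might layer anomaly detection or machine learning on top of the same idea.

```python
# Each rule: (name, predicate that must hold, message shown when it does not).
RULES = [
    ("no_public_storage",
     lambda c: not c.get("storage_public", False),
     "storage must not be publicly readable"),
    ("tls_minimum",
     lambda c: c.get("min_tls", 0) >= 1.2,
     "TLS 1.2 or higher is required"),
    ("no_wildcard_iam",
     lambda c: "*" not in c.get("iam_actions", []),
     "wildcard IAM actions are forbidden"),
]

def check(config: dict) -> list:
    """Return the message for every rule the configuration fails."""
    return [message for _name, passes, message in RULES if not passes(config)]

if __name__ == "__main__":
    candidate = {"storage_public": True, "min_tls": 1.0, "iam_actions": ["payments:Read", "*"]}
    for finding in check(candidate):
        print("FAIL:", finding)
```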

As we go forward, those are the types of things that — when we are in a public cloud — we need to account for. When we were all internal, we had a lot of perimeter defenses. The new perimeter becomes more nebulous in a public cloud. You can create virtual private clouds, but you need to be very wary of expanding time factors, or latency.

Gardner: If you can check off these security and performance requirements, and you are able to start exploiting the hybrid cloud continuum across different localities, what do you get? What are the business outcomes you’re seeking?

Common cloud consistency 

Pelizzoli: A couple of things. One is agility, in terms of being able to deploy to two adjacent countries, if one country has a major outage. That means ease of access to a payments-grade network — without having to go through and put in hardware, which will invariably fail.

Also, the ability to scale quickly. There is an expected peak season for payments, such as around the Christmas holidays. But there could be an unexpected peak season based on bad news — not a peak season, but a peak day. How do you go through and have your systems scale within one country that wasn’t normally producing a lot of transactions? All of a sudden, now it’s producing 18 times the amount of transactions.

Those types of things give us a different development paradigm. We have a lot of developers. A [common cloud approach] would give us consistency, and the ability to be clean in how we automate deployment; the testing side of it, the security checks, etc.

Before, there were a lot of different ways of doing development, depending on the language and the target. Bringing that together would allow increased velocity and reduced cost, in most cases. And what I mean by “most cases” is I can use only what I need and scale as I require. I don’t have to build for the worst possible day and then potentially never hit it. So, I could use my capacity more efficiently.

Gardner: Robert, it sounds like major financial applications, like a global real-time payment solution, are getting from the cloud what startups and cloud-native organizations have taken for granted. We’re now able to take the benefits of cloud to some of the most extreme and complex use cases. 

Cloud-driven global agility

Christiansen: That’s a really good observation, Dana. A healthcare organization could use the same technologies to leverage an industrial-strength transaction platform that allows them to deliver healthcare solutions globally. And they could deem it as a future-proof infrastructure solution. 

One of the big advantages of the public cloud has been the isolation of all those things that many central IT teams have had to do day-in and day-out: patching releases, upgrading processes, and constantly working on the refresh. They call it painting the Golden Gate Bridge – where once you finish painting the bridge, you have to go back and do it all over again. And a lot of that effort and money goes into that refresh process.

And so they are asking themselves, “Hey, how can we take our $3 or $4 billion IT spend, and take x amount of that and begin applying it toward innovation?”

Right now there is so much rigidity. Everyone is asking the same question, “How do I compete globally in a way that allows me to build the agility transformation into my organization?” 

And if someone can take a piece out of that equation, all things are eligible. But the balance against what Paolo was talking about — the industrial-grade network and transaction framework needed to get this done — cannot be relinquished.

So people are asking a lot of the same questions. They come in and ask us at CTP, “Hey, what use-cases are actually in place today where I can start leveraging portions of the public cloud so I can start knocking off pieces?”

Paolo, how do you use your existing infrastructure, and what portion of cloud enablement can you bring to the table? Is it cloud-first, where you say, “Hey, everything is up for grabs?” Or are you more isolated into using cloud only in a certain segment?

Follow a paved path of patterns

Pelizzoli: Obviously, the endgame is to be in the cloud 100 percent. That’s utopian. How do we get there? There is analysis being done. It depends if we are talking about real-time payments, which is actually more prepared to go into the cloud than some of the core processing that handles most of North America and Europe from an individual credit card or debit card swipe. Some of those core pieces need more rewiring to take advantage of the cloud.

When we look at it, we are decomposing all of the legacy systems and seeing how well they fit in to what we call a paved path of patterns. If there is a paved path for a specific type of pattern, we put it on the list of things to transition to, as being built as a cloud-native service. And then we run it alongside its parent for a while, to test it, through stressful periods and through forced chaos. If the segment goes down, where does it flip over to? And what is the recovery time?
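
A simplified sketch of that parallel-run step, with stand-in services and hypothetical names: replay the same traffic through the legacy implementation and the cloud-native candidate, compare the results, inject a failure, and time the recovery.

```python
import time

def legacy_service(txn: dict) -> str:
    """Stand-in for the existing implementation still pinned to bare metal."""
    return f"AUTH-{txn['id']}"

class CloudNativeCandidate:
    """Hypothetical cloud-native replacement that we can break on purpose."""
    def __init__(self) -> None:
        self.healthy = True

    def handle(self, txn: dict) -> str:
        if not self.healthy:
            raise RuntimeError("instance down")
        return f"AUTH-{txn['id']}"

def shadow_run(candidate: CloudNativeCandidate, traffic: list) -> None:
    """Replay traffic through both implementations, comparing every result."""
    mismatches = 0
    for txn in traffic:
        expected = legacy_service(txn)
        try:
            actual = candidate.handle(txn)
        except RuntimeError:
            started = time.monotonic()
            candidate.healthy = True          # stand-in for failover and recovery
            actual = candidate.handle(txn)
            print(f"recovered in {time.monotonic() - started:.6f}s")
        if actual != expected:
            mismatches += 1
    print(f"{len(traffic)} transactions replayed, {mismatches} mismatches")

if __name__ == "__main__":
    candidate = CloudNativeCandidate()
    candidate.healthy = False                 # forced chaos: break it before the replay
    shadow_run(candidate, [{"id": n} for n in range(1000)])
```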

The one thing we cannot do is in any way increase latency. In fact, we have some very aggressive targets to reduce latency wherever we can. We also want to improve the recovery and security of the individual components, which we end up calling value-added services.

There are some basic services we have to provide, and then value-added services, which people can opt in or opt out of. We do have a plan and strategy to go through and prioritize that list.

Gardner: Paolo, as you master hybrid cloud, you must have visibility and monitoring across these different models. It’s a new kind of monitoring, a new kind of management.

What do you look to from CTP and HPE to help attain new levels of insight so you can measure what’s going on, and therefore optimize and automate?

Pelizzoli: CTP has been a very good and integral part of our first steps into the cloud. 

Now, I will give you one disclaimer. We have some companies that are Mastercard companies that are already in the cloud, and were born in the cloud. So we have experience with AWS, we have experience with Azure, and we have some experience with Google Cloud Platform.

It’s not that Mastercard isn’t in the cloud already, it is. But when you start taking the entire plant and moving it, we want to make sure that the security controls, which CTP has been helping ratify, get extended into the cloud — and where appropriate, actually removed, because there are better ones in the cloud today.

Extend the cloud management office 

Now, the next phase is to start building out a cloud management office. Our cloud management office was created early last year. It is now getting the appropriate checks and audits from finance, the application teams, the architecture team, security teams, and so on.

As that list of prioritized applications comes through, they have the appropriate paved path, checks, and balances. If there are any exceptions, it gets fiercely debated and will either get a pass or it will not. But even if it does not, it can still sit within our on-premises version of the cloud; it’s just more protected.

As we route all the traffic, there are going to be a lot of checks within the different network hops that it has to take, to prevent certain information from getting outside when it’s not appropriate.

Gardner: And is there something of a wish list that you might have for how to better fulfill the mandate of that cloud management office?

Pelizzoli: We have CTP, which HPE purchased along with RedPixie. They cover, between those two acquisitions, all of the public cloud providers.

Now, the cloud providers themselves are selling you the next feature-function to move themselves ahead of their competitor. CTP and RedPixie are taking the common denominator across all of them to make sure that you are not overstepping promises from one cloud provider into another cloud provider. You are not thinking that everybody is moving at the same pace.

They also provide implementation capabilities, migration capabilities, and testing capabilities through the larger HPE organization. The fact is we have strong relationships with Microsoft and with Amazon, and so does HPE. If we can bring the collective muscle of Mastercard, HPE, and the cloud providers together, we can move mountains.

Gardner: We hear folks like Paolo describe their vision of what’s possible when you can use the cloud providers in an orchestrated, concerted, and value-added approach. 

Other people in the market may not understand what is going on across multi-cloud management requirements. What would you want them to know, Robert?

O brave new hybrid world

Christiansen: A hybrid world is the true reality. The complexity of the enterprise, no matter what industry you are in, has caused these application centers of gravity. Latency issues between applications that could be moved to cloud or not, or that are impacted by where the data resides, have created huge gravity issues, so organizations are unable to take advantage of the frameworks that the public clouds provide.

So, the reality is that the public cloud is going to have to come down into the four walls of the enterprise. As a result of that, we are seeing an explosion of common abstractions — there is going to be some open-source framework for all clouds to communicate and to talk and behave alike.

Over the past decade, the on-premises and OpenStack world has been decommissioning the whole legacy technology stack, moving it off to the side as a priority, as it seeks to adopt cloud. The reality now is that we have regional, government, and data privacy issues; we have got all sorts of things that are pulling it all back internally again.

Out of all this chaos is going to rise the phoenix of some sort of common framework. There has to be. There is no other way out of this. We are already seeing organizations such as Paolo’s at Mastercard develop a mandate to take the agile step forward.

They want somebody to provide the ability to gain more business value versus the technology, to manage and keep track of infrastructure, and to future-proof that platform. But at the same time, they want a technology position where they can use common frameworks, common languages, things that give interoperability across multiple platforms. That’s where you are seeing a huge amount of investment. 

I don’t know if you recently saw that HashiCorp got $100 million in additional funding, and they have a valuation of almost $2 billion. This is a company that specializes in sitting in that space. And we are going to see more of that.

And as folks like Mastercard drive the requirements, the all-in on one public cloud mentality is going to quickly evaporate. These platforms absolutely have to learn how to play together and get along with on-premises, as well as between themselves.

Gardner: Paolo, any last thoughts about how we get cloud providers to be team players rather than walking around with sharp elbows?

Tech that plays well with others

Pelizzoli: I think it’s actually going to end up being the technology that is allowed to run on these cloud platforms that takes care of it.

I mentioned Kubernetes and Docker earlier, and there are others out there. The fact that they can isolate themselves from the cloud provider itself is where it will neutralize some of the sharp elbowing that goes on.

Now, there are going to be features that keep coming up that I think companies like ours will take a look at, and we will start putting workloads where the latest cutting-edge feature gives us a competitive advantage and then wait for other cloud providers to go through and catch up. And when they do, we can then deploy out on those. But those will be very conscious decisions.

I don’t think that there is a one cloud that fits all, but where appropriate we will go through and be absolutely multi-cloud. Where there is a defining difference, we will go through and select the cloud provider that best suits that area to cover that specific capability.

Gardner: It sounds like these extreme use cases and the very important requirements that organizations like Mastercard have will compel this marketplace to continue to flourish rather than become one-size-fits-all. So it is an interesting time, as the maturation of the applications and use cases actually starts to create more of a democratization of cloud in the marketplace.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Where the rubber meets the road: How users see the IT4IT standard building competitive business advantage

The next BriefingsDirect IT operations strategy panel discussion explores how the IT4IT™ Reference Architecture for IT management creates demonstrated business benefits – in many ways, across many types of organizations.

Since its delivery in 2015 by The Open Group, IT4IT has focused on defining, sourcing, consuming, and managing services across the IT function’s value stream to its stakeholders. Among its earliest and most ardent users are IT vendors, startups, and global professional services providers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how this variety of highly efficient businesses and their IT organizations make the most of IT4IT – often as a complementary mix of frameworks and methodologies — we are now joined by our panel: Lars Rossen of Micro Focus; Mark Bodman of ServiceNow; John Esler of HPE Pointnext; Jerrod Bennett; Varun Vijaykumar; and Rob Akershoek of Fruition Partners.

The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. Here are some excerpts:

Gardner: Big trends are buffeting business in 2019. Companies of all kinds need to attain digital transformation faster, make their businesses more intelligent and responsive to their markets, and improve end user experiences. So, software development, applications lifecycles, and optimizing how IT departments operate are more important than ever. And they need to operate as a coordinated team, not in silos.

Lars, why is the IT4IT standard so powerful given these requirements that most businesses face?

Rossen: There are a number of reasons, but the starting point is the fact that it’s truly end-to-end. IT4IT starts from the planning stage — how to convert your strategy into actionable projects that are being measured in the right manner — all the way to development, delivery of the service, how to consume it, and at the end of the day, to run it.

There are many other frameworks. They are often very process-oriented, or capability-oriented. But IT4IT gives you a framework that underpins it all. Every IT organization needs to have such a framework in place and be rationalized and well-integrated. And IT4IT can deliver that.

Gardner: And IT4IT is designed to help IT organizations elevate themselves in terms of the impact they have on the overall business.

Mark, when you encounter someone who asks of IT4IT, “What is that?” what’s your elevator pitch? How do you describe it so that a lay audience can understand it?

Bodman: I pitch it as a framework for managing IT and leave it at that. I might also say it’s an operating model because that’s something a chief information officer (CIO) or a business person might know.

If it’s an individual contributor in one of the value streams, I say it’s a broader framework than what you are doing. For example, if they are a DevOps guy, or a maybe a Scaled Agile Framework (SAFe) guy, or even a test engineer, I explain that it’s a more comprehensive framework. It goes back to the nature of IT4IT being a hub of many different frameworks — and all designed as one architecture.

Gardner: Is there an analog to other business, or even cultural, occurrences that IT4IT is to an enterprise? 

Rossen: The analogy I have comes from The Lord of the Rings: IT4IT is the “one ring to rule them all.” It actually combines everything you need.

Gardner: Why do companies need this now? What are the problems they’re facing that requires one framework to rule them all?

Everyone, everything on the same page

Esler: A lot of our clients have implemented a lot of different kinds of software — automation software, orchestration software, and portals. They are sharing more information, more data. But they haven’t changed their operating model.

Using IT4IT is a good way to see where your gaps are, what you are doing well, what you are not doing so well, and how to improve on that. It gives you a really good foundation on knowing the business of IT.

Bennett: What we are hearing in the field is that IT departments are generally drowning at this point. You have a myriad of factors, some of which are their fault and some of which aren’t. The compliance world is getting nightmare-strict. The privacy laws that are coming in are straining what are already resource-constrained organizations. At the same time, budgets are being cut.

The other side of it is the users are demanding more from IT, as a strategic element as opposed to simply a support organization. As a result, they are drowning on a daily basis. Their operating model is — they are still running on wooden wheels. They have not changed any of their foundational elements.

If your family has a spending problem, you don’t stop spending, you go on a budget. You put in an Excel spreadsheet, get all the data into one place, pull it together, and you figure out what’s going on. Then you can execute change. That’s what we do from an IT perspective. It’s simply getting everything in the same place, on the same page, and talking the same language. Then we can start executing change to survive.

Peruse a Full Library of IT4IT Reference Architecture Publications

Gardner: Because IT in the past could operate in silos, there would be specialization. Now we need a team-sport approach. Mark, how does IT4IT help that?

Bodman: An analogy is the medical profession. You have specialists, and you have generalist doctors. You go to the generalist when you don’t really know where the problem is. Then you go to a specialist with a very specific skill-set and the tools to go deep. IT4IT has aimed at that generalist layer, then with pointers to the specialists.

Gardner: IT4IT has been available since October 2015, which is a few years in the market. We are now seeing different types of adoption patterns—from small- to medium-size businesses (SMBs) and up to enterprises. What are some “rubber meets the road” points, where the value is compelling and understood, that then drive this deeper into the organization?

Where do you see IT4IT as an accelerant to larger business-level improvements?

Success via stability

Vijaykumar: When we look at the industry in general there are a lot of disruptive innovations, such as cloud computing taking hold. You have other trends like big data, too. These are driving a paradigm shift in the way IT is perceived. So, IT is not only a supporting function to the business anymore — it’s a business enabler and a competitive driver.

Now you need stability from IT, and IT needs to function with the same level of rigor as a bank or manufacturer. If you look at those businesses, they have reference architectures that span several decades. That stability was missing in IT, and that is where IT4IT fills a gap — we have come up with a reference architecture.

What does that mean? When you implement new tooling solutions or you come up with new enterprise applications, you don’t need to rip apart and replace everything. You could still use the same underlying architecture. You retain most of the things — even when you advance to a different solution. That is where a lot of value gets created.

Esler: One thing you have to remember, too, is that this is not just about new stuff. It’s not just about artificial intelligence (AI), the Internet of Things (IoT), big data, and all of that kind of stuff — the new, shiny stuff. There is still a lot of old stuff out there that has to be managed in the same way. You have to have a framework like IT4IT that allows you to have a hybrid environment to manage it all.

Gardner: The framework to rule all frameworks.

Rossen: That also goes back to the concept of multi-modal IT. Some people say, “Okay, I have new tools for the new way of doing stuff, and I keep my old tools for the old stuff.”

But, in the real world, these things need to work together. The services depend on each other. If you have a new smart banking application, and you still have a COBOL mainframe application that it needs to communicate with, if you don’t have a single way of managing these two worlds you cannot keep up with the necessary speed, stability, and security.

Gardner: One of the things that impresses me about IT4IT is that any kind of organization can find value and use it from the get-go. As a start-up, an SMB, Jerrod, where are you seeing the value that IT4IT brings?

Solutions for any size business

Bennett: SMBs have less pain, but proportionally it’s the same, exact problem. Larger enterprises have enormous pain, the midsize guys have medium pain, but it’s the same mess.

But the SMBs have an opportunity to get a lot more value because they can implement a lot more of this a lot faster. They can even rip up the foundation and start over, a greenfield approach. Most large organizations simply do not have that capability.

The same kind of change applies to big data: how much data is going to be created in the next five years versus the last five years? That’s universal; everyone is dealing with these problems.

Gardner: At the other end of the scale, Mark, big multinational corporations with sprawling IT departments and thousands of developers — they need to rationalize, they need to limit the number of tools, find a fit-for-purpose approach. How does IT4IT help them? 

Bodman: It helps to understand which areas to rationalize first, that’s important because you are not going to do everything at once. You are going to focus on your biggest pain points.

The other element is the legacy element. You can’t change everything at once. There are going to be bigger rocks, and then smaller rocks. Then there are areas where you will see folks innovate, especially when it comes to the DevOps, new languages, and new platforms that you deploy new capabilities on.

What IT4IT allows is for you to increasingly interchange those parts. A big value proposition of IT4IT is standardizing those components and the interfaces. Afterward, you can change out one component without disrupting the entire value chain.
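
A small illustration of that point, with invented class names: as long as both tools satisfy the same standardized interface, the caller, and the rest of the value chain behind it, does not change when a component is swapped out.

```python
from typing import Protocol

class IncidentBackend(Protocol):
    """The standardized contract the rest of the tool chain codes against."""
    def open_incident(self, summary: str) -> str: ...

class LegacyHelpdesk:
    def open_incident(self, summary: str) -> str:
        return f"LEGACY-0001: {summary}"

class NewSaaSTool:
    def open_incident(self, summary: str) -> str:
        return f"SAAS-9001: {summary}"

def raise_from_monitoring(backend: IncidentBackend, alert: str) -> str:
    # The caller never changes when the component behind the interface is swapped.
    return backend.open_incident(alert)

if __name__ == "__main__":
    for backend in (LegacyHelpdesk(), NewSaaSTool()):
        print(raise_from_monitoring(backend, "Payment API latency above threshold"))
```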

Gardner: Rob, complexity is inherent in IT. They have a lot on their plate. How does the IT4IT Reference Architecture help them manage complexity?

Reference architecture connects everything

Akershoek: You are right, there is growing complexity. We have more services to manage, more changes and releases, and more IT data. That’s why it’s essential in any sized IT organization to structure and standardize how you manage IT in a broader perspective. It’s like creating a bigger picture.

Most organizations have multiple teams working on different tools and components in a whole value chain. I may have specialized people for security, monitoring, the service desk, development, for risk and compliance, and for portfolio management. They tend to optimize their own silo with their own practices. That’s what IT4IT can help you with — creating a bigger picture. Everything should be connected.

Esler: I have used IT4IT to help get rid of those very same kinds of silos. I did it via a workshop format. I took the reference architecture from IT4IT and I got a certain number of people — and I was very specific about the people I wanted — in the room. In doing this kind of thing, you have to have the right people in the room.

We had people for service management, security, infrastructure, and networking — just a whole broad range across IT. We placed them around the table, and I took them through the IT4IT Reference Architecture. As I described each of the words, each of which represented a function, they began to talk among themselves, to say, “Yes, I had a piece of that. I had this piece of this other thing. You have a piece of that, and this piece of this.”

It started them thinking about the larger functions, that there are groups performing not just the individual pieces, like service management or infrastructure.

Gardner: IT4IT then is not muscling out other aspects of IT, such as Information Technology Infrastructure Library (ITIL), The Open Group Architecture Framework (TOGAF), and SAFe. Is there a harmonizing opportunity here? How does IT4IT fit into a larger context among these other powerful tools, approaches, and methodologies?

Rossen: That’s an excellent question, especially given that a lot of people into SAFe might say they don’t need IT4IT, that SAFe is solving their whole problem. But once you get to discuss it, you see that SAFe doesn’t give you any recommendation about how tools need to be connected to create the automated pipeline that SAFe relies on. So IT4IT actually complements SAFe very well. And that’s the same story again and again with the other ones.

The IT4IT framework can help bring those two things – ITIL and SAFe — together without changing the IT organizations using them. ITIL can still be relevant for the helpdesk, et cetera, and SAFe can still function — and they can collaborate better.

Gardner: Varun, another important aspect to maturity and capability for IT organizations is to become more DevOps-oriented. How does DevOps benefit from IT4IT? What’s the relationship?

Go with the data flow

Vijaykumar: When we talk about DevOps, typically organizations focus on the entire service design lifecycle and how it moves into transition. But the relationship sometimes gets lost between how a service gets conceptualized to how it is translated into a design. We need to use IT4IT to establish traceability, to make sure that all the artifacts and all the information basically flows through the pipeline and across the IT value chain.

The way we position the IT4IT framework to organizations and customers is very important. A lot of times people ask me, “Is this going to replace ITIL?” Or, “How is it different from DevOps?”

The simplest way to answer those questions is to tell them that this is not something that provides narrative guidance. It’s not a process framework, but rather an information framework. We are essentially prescribing the way data needs to flow across the entire IT value chain, and how information needs to get exchanged.

It defines how those integrations are established. And that is vital to having an effective DevOps framework because you are essentially relying on traceability to ensure that people receive the right information to accept services, and then support those services once they are designed.
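
As a minimal sketch of what an information framework implies in practice, the snippet below links a few simplified records so an incident can be traced back to the requirement behind the release it affects. The entity names are stand-ins, not the normative IT4IT data objects.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    description: str

@dataclass
class Release:
    release_id: str
    satisfies: list = field(default_factory=list)  # requirement ids

@dataclass
class Incident:
    incident_id: str
    release_id: str

def trace_incident(incident: Incident, releases: dict, requirements: dict) -> list:
    """Walk an incident back to the requirements behind the affected release."""
    release = releases[incident.release_id]
    return [requirements[req_id] for req_id in release.satisfies]

if __name__ == "__main__":
    requirements = {"R-1": Requirement("R-1", "Customers can pay instantly across borders")}
    releases = {"REL-7": Release("REL-7", satisfies=["R-1"])}
    incident = Incident("INC-42", release_id="REL-7")
    for req in trace_incident(incident, releases, requirements):
        print(f"{incident.incident_id} traces back to {req.req_id}: {req.description}")
```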

Gardner: Let’s think about successful adoption, of where IT4IT is compelling to the overall business. Jerrod, among your customers where does IT4IT help them?

Holistic strategy benefits business

Bennett: I will give an example. I hate the word, but “synergy” is all over this. Breaking down silos and having all this stuff in one place — or at least in one process, one information framework — helps the larger processes get better.

The classic example is Agile development. Development runs in a silo, they sit in a black box generally, in another building somewhere. Their entire methodology of getting more efficient is simply to work faster.

So, they implement sprints, or Agile, or scrum, or you name it. And what you recognize is they didn’t have a resource problem, they had a throughput problem. The throughput problem can be slightly solved using some of these methodologies, by squeezing a little bit more out of their glides.

But what you find, really, is they are developing the wrong thing. They don’t have a strategic element to their businesses. They simply develop whatever the heck they decide is important. Only now they develop it really efficiently. But the output on the other side is still not very beneficial to the business.

If you input a little bit of strategy in front of that and get the business to decide what it is that they want you to develop – then all of a sudden your throughput goes through the roof. And that’s because you have broken down barriers and brought together the [major business elements], and it didn’t take a lot. A little bit of demand management with an approval process can make development 50 percent more efficient — if you can simply get them working on what’s important.

It’s not enough to continue to stab at these small problems while no one has yet said, “Okay, timeout. There is a lot more to this information that we need.” You can take inspiration from the manufacturing crisis in the 1980s. Making an automobile engine conveyor line faster isn’t going to help if you are building the wrong engines or you can’t get the parts in. You have to view it holistically. Once you view it holistically, you can go back and make the assembly lines work faster. Do that and sky is the limit.

Gardner: So IT4IT helps foster “simultaneous IT operations,” a nice and modern follow-on to simultaneous engineering innovations of the past.

Mark, you use IT4IT internally at ServiceNow. How does IT4IT help ServiceNow be a better IT services company?

IT to create and consume products

Bodman: A lot of the activities at ServiceNow are for creating the IT Service Management (ITSM) products that we sell on the market, but we also consume them. As a product manager, a lot of my job is interfacing with other product managers, dealing with integration points, and having data discussions.

As we make the product better, we automatically make our IT organization better because we are consuming it. Our customer is our IT shop, and we deploy our products to manage our products. It’s a very nice, natural, and recursive relationship. As the company gets better at product management, we can get more products out there. And that’s the goal for many IT shops. You are not creating IT for IT’s sake, you are creating IT to provide products to your customers.

Gardner: Rob, at Fruition Partners, a DXC Technology company, you have many clients that use IT4IT. Do you have a use case that demonstrates how powerful it can be?

Akershoek: Yes, I have a good example of an insurance organization where they have been forced to reduce significantly the cost to develop and maintain IT services.

Initially, they said, “Oh, we are going to automate and monitor DevOps.” When I showed them IT4IT they said, “Well, we are already doing that.” And I said, “Why don’t you have the results yet?” And they said, “Well, we are working on it, come back in three months.”

But after that period of time, they still were not succeeding with speed. We said, “Use IT4IT, take it to specific application teams, and then move to cloud, in this case, Azure Cloud. Show that you can do it end-to-end, from strategy into operation, in three months’ time, and demonstrate that it works.”

And that’s what has been done, it saved time and created transparency. With that outcome they realized, “Oh, we would have never been able to achieve that if we had continued the way we did it in the past.” 

Gardner: John, at HPE Pointnext, you are involved with digital transformation, the highest order of strategic endeavors and among the most important for companies nowadays. When you are trying to transform an organization – to become more digital, data-driven, intelligent, and responsive — how does IT4IT help?

Esler: When companies do big, strategic things to try and become a digital enterprise, they implement a lot of tools to help. That includes automation and orchestration tools to make things go faster and get more services out.

But they forget about the operating model underneath it all and they don’t see the value. A big drug company I worked with was expecting a 30 percent cost reduction after implementing such tools, and they didn’t get it. And they were scratching their heads, asking, “Why?”

We went in and used IT4IT as a foundation to help them understand where they needed change. In addition to using some tools that HPE has, that helped them to understand — across different domains, depending on the level of service they want to provide to their customers — what they needed to change. They were able to learn what that kind of organization looks like when it’s all said and done.

Gardner: Lars, Micro Focus has 4,000 to 5,000 developers and needs to put software out in a timely fashion. How has IT4IT helped you internally to become a better development organization?

Streamlining increases productivity

Rossen: We used what is by now a standard technique in IT4IT, to do rationalization. Over a year, we managed to convert it all into a single tool chain that 80 percent of the developers are on.

With that we are now much more agile in delivering products to market. We can do much more sharing. So instead of taking a year, we can do the same easily every three months. But we also have hot fixes and a change focus. We probably have 20 releases a day. And on top of that, we can do a lot more sharing on components. We can align much more to a common strategy around how all our products are being developed and delivered to our customers. It’s been a massive change.

Gardner: Before we close out, I’d like to think about the future. We have established that IT4IT has backward compatibility, that if you are a legacy-oriented IT department, the reference architecture for IT management can be very powerful for alignment to newer services development and use.

But there are so many new things coming on, such as AIOps, AI, machine learning (ML), and data-driven and analytics-driven business applications. We are also finding increased hybrid cloud and multi-cloud complexity across deployment models. And better managing total costs to best operate across such a hybrid IT environment is also very important.

So, let’s take a pause and say, “Okay, how does IT4IT operate as a powerful influence two to three years from now?” Is IT4IT something that provides future-proofing benefits?

The future belongs to IT4IT 

Bennett: Nothing is future-proof, but I would argue that we really needed IT4IT 20 years ago — and we didn’t have it. And we are now in a pretty big mess.

There is nothing magical here. It’s been well thought-out and well-written, but there is nothing new in there. IT4IT is how it ought to have been for a while and it took a group of people to get together and sit down and architect it out, end-to-end.

Theoretically it could have been done in the 1980s and it would still be relevant because they were doing the same thing. There isn’t anything new in IT, there are lots of new-fangled toys. But that’s all just minutia. The foundation hasn’t changed. I would argue that in 2040 IT4IT will still be relevant.

Gardner: Varun, do you feel that organizations that adopt IT4IT are in a better position to grow, adapt, and implement newer technologies and approaches? 

Vijaykumar: Yes, definitely, because IT4IT – although it caters to the traditional IT operating models — also introduces a lot of new concepts that were not in existence earlier. You should look at some of the concepts like service brokering, catalog aggregation, and bringing in the role of a service integrator. All of these are things that may have been in existence, but there was no real structure around them.

IT4IT provides a consolidated framework for us to embrace all of these capabilities and to drive improvements in the industry. Coupled with advances in computing — where everything gets delivered on the fly – and where end users and consumers expect a lot more out of IT, I think IT4IT helps in that direction as well.

Gardner: Lars, looking to the future, how do you think IT4IT will be appreciated by a highly data-driven organization?

Rossen: Well, IT4IT was a data architecture to begin with. So, in that sense it was the first time that IT itself got a data architecture that was generic. Hopefully that gives it a long future.

I also like to think about it as being like roads we are building. We now have the roads to do whatever we want. Eventually you stop caring about it, it’s just there. I hope that 20 years from now nobody will be discussing this, they will just be doing it.

The data model advantage

Gardner: Another important aspect to running a well-greased IT organization — despite the complexity and growing responsibility — is to be better organized and to better understand yourself. That means having better data models about IT. Do you think that IT4IT-oriented shops have an advantage when it comes to better data models about IT?

Bodman: Yes, absolutely. One of the things we just produced within the [IT4IT reference architecture data model] is a reporting capability for key performance indicators (KPI) guidance. We are now able to show what kinds of KPIs you can get from the data model — and be very prescriptive about it.

In the past there had been different camps and different ways of measuring and doing things. Of course, it’s hard to benchmark yourself comprehensively that way, so it’s really important to have consistency there in a way that allows you to really improve.
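
A toy example of that consistency, with invented records: when every team's request data is shaped the same way, a KPI such as mean time to fulfill is computed one way and benchmarked apples-to-apples.

```python
from datetime import datetime, timedelta
from statistics import mean

# Uniformly shaped "request to fulfill" records, regardless of which team owns them.
requests = [
    {"opened": datetime(2019, 3, 1, 9, 0),  "fulfilled": datetime(2019, 3, 1, 13, 0)},
    {"opened": datetime(2019, 3, 2, 10, 0), "fulfilled": datetime(2019, 3, 4, 10, 0)},
    {"opened": datetime(2019, 3, 3, 8, 0),  "fulfilled": datetime(2019, 3, 3, 9, 30)},
]

def mean_time_to_fulfill(records: list) -> timedelta:
    """Average elapsed time between a request being opened and being fulfilled."""
    seconds = mean((r["fulfilled"] - r["opened"]).total_seconds() for r in records)
    return timedelta(seconds=seconds)

if __name__ == "__main__":
    print("Mean time to fulfill:", mean_time_to_fulfill(requests))
```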

The second part — and this is something new in IT4IT that is fundamental — is the value stream has a “request to fulfill (R2F)” capability. It’s now possible to have a top-line, self-service way to engage with IT in a way that’s in a catalog and that is easy to consume and focused on a specific experience. That’s an element that has been missing. It may have been out there in pockets, but now it’s baked in. It’s just fabric, taught in schools, and you just basically implement it.

Rossen: The new R2F capability allows an IT organization to transform, from being a cost center that does what people ask, to becoming a service provider and eventually a service broker, which is where you really want to be.

Esler: I started in this industry in the mainframe days. The concept of shared services was prevalent, so time-sharing, right? It’s the same thing. It hasn’t really changed. It’s evolved and going through different changes, but the advent of the PC in the 1980s didn’t change the model that much.

Now with hyperconvergence, it’s moving back to that mainframe-like thing where you define a machine by software. You can define a data center by software.

Gardner: For those listening and reading and who are intrigued by IT4IT and would like to learn more, where can they go and find out more about where the rubber meets the IT road?

Akershoek: The best way is to go to The Open Group website. There’s a lot of information on the reference architecture itself, case studies, and video materials.

To get started, you can typically start small. Look at the materials, try to understand how you currently operate your IT organization, and map it to the reference architecture.

That provides an immediate sense of what you may be missing, where you are duplicating effort, or where too much is going on without governance. You can begin to create a picture of your IT organization. That’s the first step to try to create, or co-create with your own organization, a bigger picture and decide where you want to go next.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


IT kit sustainability: A business advantage and balm for the planet

The next BriefingsDirect sustainable resources improvement interview examines how more companies are plunging into the circular economy to make the most of their existing IT and business assets.

We’ll now hear how more enterprises are optimizing their IT kit and finding innovative means to reduce waste — as well as reduce energy consumption and therefore their carbon footprint. Stay with us as we learn how a circular economy mindset both improves sustainability as a benefit to individual companies as well as the overall environment.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the latest approaches to sustainable IT is William McDonough, Chief Executive of McDonough Innovation and Founder of William McDonough and Partners, and Gabrielle Ginér, Head of Environmental Sustainability for BT Group, based in London. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: William, what are the top trends driving the need for reducing waste, redundancy, and inefficiency in the IT department and data center?

McDonough: Materials and energy are both fundamental, and I think people who work in IT systems that are often optimized have difficulty with the concept of waste. What this is about is eliminating the entire concept of waste. So, one thing’s waste is another thing’s food — and so when we don’t waste time, we have food for thought.

A lot of people realize that it’s great to do the right thing, and that would be to not destroy the planet in the process of what you do every day. But it’s also great to do it the right way. When we see the idea of redesigning things to be safe and healthy, and then we find ways to circulate them ad infinitum, we are designing for next use — instead of end of life. So it’s an exciting thing.

Gardner: If my example as an individual is any indication, I have this closet full of stuff that’s been building up for probably 15 years. I have phones and PCs and cables and modems in there that are outdated but that I just haven’t gotten around to dealing with. If that’s the indication on an individual home level, I can hardly imagine the scale of this at the enterprise and business level globally. How big is it?

Devices designed for reuse

McDonough: It’s as big as you think it is, everywhere. What we are looking at is that design is the first signal of human intention. If we design these things to be disassembled and reusable, we therefore design for next use. That’s the fundamental shift, that we are now designing differently. We don’t say we design for one-time use: Take, make, waste. We instead design it for what’s next.

And it’s really important, especially in IT, because these things, in a certain way, they are ephemeral. We call them durables, but they are actually only meant to last a certain amount of time before we move onto the next big thing.

Learn How to Begin Your IT Circular Economy Journey

If we designed your phone in the last 25 years, the odds of you using the same phone for 25 years are pretty low. The notion that we can design these things to become useful again quickly is really part of the new system. We now see the recycling of phone boards that actually go all the way back to base materials in very cost-effective ways. You can mine gold at $210 a ton out there, or you can mine phone boards at about $27,000 a ton. So that’s pretty exciting.

Gardner: There are clearly economic rationales for doing the right thing. Gabrielle, tell us why this is important to BT as a telecommunications leader.

Ginér

Ginér: We have seen change in how we deal with and talk to consumers about this. We actually encourage them now to return their phones. We are paying for them. Customers can just walk into a store and get money back. That’s a really powerful incentive for people to return their phones.

Gardner: This concept of design for reuse and recovery is part of the cradle-to-cradle design concept that you helped establish, William. Tell us how your book, Cradle to Cradle, leads to the idea of a circular economy.

Reuse renews the planet

McDonough: When we first posited Cradle to Cradle, we said you can look at the Earth and realize there are two fundamental systems at play. One is the biological system of which we are a part, the natural systems. In those systems, waste equals food. Everything in them wants to be safe and healthy: the things you wear, the water, the food. Those are biological nutrients.

Then we have technology. Once we started banging on rocks and making metals and plastics and things like that, that’s really technical nutrition. It’s another metabolism. So we don’t want to get the two confused. 

When we talk about lifecycle, we like to point out that living things have a lifecycle. But your telephone is not a living thing — and yet we talk about it having a lifecycle, and then an end of life. Well, wait a minute, it’s not alive. It talks to you, but it’s not alive. So really it’s a product, or a service. 

In Cradle to Cradle we say there are things that our biology needs to be safe, healthy, and to go back to the soil safely. And then there is technology. Technology needs to come back into technology and to be used over and over again. It’s for our use.

And so, this brings up the concept we introduced, which is product-as-a-service. What you actually want from the phone is not 4,600 different kinds of chemicals. You want a telephone you can talk into for a certain period of time. And it’s a service you want, really. And we see this being products-as-services, and that becomes the circular economy.

Once you see that, you design it for that use. Instead of saying, “Design for end-of-life; I am going to throw it in a landfill,” you say, “I design it for next use. That means it’s designed for disassembly. We know we are going to use it again.” It becomes part of a circular economy, which will grow the economy because we are doing it again and again.

Gardner: This approach seems to be win-win-win. There are lots of incentives, lots of rationales for not only doing well, but for doing good as companies. For example, Hewlett Packard Enterprise (HPE) recently announced a big initiative about this.

Another part of this in the IT field that people don’t appreciate is the amount of energy that goes into massive data centers. The hyperscale cloud companies are each investing billions of dollars a year in these new data centers. It financially behooves them to consume less energy, but the amount of energy that data centers need is growing at a fantastic rate, and it is therefore becoming a larger percentage of the overall carbon footprint.

William, do carbon and energy also need to be considered in this whole circular economy equation?

Intelligent energy management

McDonough: Clearly with the issues concerning climate and energy management, yes. If our energy is coming from fossil fuels, we have fugitive carbon in the atmosphere. That’s something that’s now toxic. We know that. A toxin is a material in the wrong place, at the wrong dose, or for the wrong duration, so this has to be dealt with.

Some major IT companies are leading in this, including Apple, Google, Facebook, and BT. This is quite phenomenal, really. They are reducing their energy consumption by being efficient. They are also adding renewables to their mix, to the point that renewables are going to be a major part of their power use — and that power is renewably sourced and carbon-free. That’s really interesting.

Learn How to Begin Your IT Circular Economy Journey

When we realize the dynamic of the energy required to move data — and that the people who do this have the possibility of doing it with renewably powered means – this is a harbinger for something really critical. We can do this with renewable energy while still using electricity. It’s not like asking some heating plant to shift gears quickly or some transportation system to change its power systems; those things are good too, but this industry is based on being intelligent and understanding the statistical significance of what you do.

Gardner: Gabrielle, how is BT looking at the carbon and energy equation and helping to be more sustainable, not only in its own operations, but across your supply chain, all the companies that you work with as partners and vendors?

Ginér: Back to William’s point, two things stand out. One, we are focused on being more energy efficient. Even though we are seeing data traffic grow by around 40 percent per year, we now have nine consecutive years of reducing energy consumption in our networks.

To the second point around renewable energy, we have an ambition to be using 100 percent renewable electricity by 2020. Last year we were at 81 percent, and I am pleased to say that we did a couple of new deals recently, and we are now up at 96 percent. So, we are getting there in terms of the renewables.

What’s been remarkable is how we have seen companies come together in coalitions that have really driven the demand and supply of renewable energy, which has been absolutely fantastic.

As for how we work with our suppliers like HPE, for example, as a customer we have a really important role to play in sending demand signals to our suppliers of what we are looking for. And obviously we are looking for our suppliers to be more sustainable. The initiatives that HPE announced recently in Madrid are absolutely fantastic and are what we are looking for.

Gardner: It’s great to hear about companies like BT that are taking a bellwether approach to this leadership position. HPE is being aggressive in terms of how it encourages companies to recycle and use more data center kit that’s been reconditioned so that you get more and more life out of the same resources.

But if you are not aggressive, if you are not on the leadership trajectory in terms of sustainability, what’s the likely outcome in a few years?

Smart, sustainable IT 

McDonough: This is a key question. When a supplier company like HPE says, “We are going to care about this,” what I like is that it’s a signal that they are providing services. A lot of companies — when they are trying to survive in business or move through different agendas to manage modern commerce — may not have time to figure out how to get renewably powered.

But for the ones that do know how to manage those things, it becomes just part of a service. That’s a really elegant thing. A company like HPE can say, “Okay, how many problems of yours can we solve? Oh, we will solve that one for you, too. Here, you do what you do, we will all do what we do — and we will all do this together.” So, I think the notion that it becomes part of the service is a very elegant thing.

As we see AI coming in, we have to remember there is this thing called human intelligence that goes with it, and there is natural intelligence that goes with being in the world.

Gardner: A lot of companies have sustainability organizations, like BT. But how closely are they aligned with the IT organization? Do IT organizations need to create their own sustainability leaders? How should companies drive more of the point of the arrow in the IT department’s direction?

McDonough: IT is really critical now because it’s at the core of operations. It touches all the information that’s moving through the system. That’s the place where we can inform the activities and our intentions. But the point today is that, as we see artificial intelligence (AI) coming in, we have to remember there is this thing called human intelligence that goes with it, and there is a natural intelligence that goes with being in the world.

We should begin with our values of what is the right thing to do. We talked about what’s right and wrong, or what’s good and bad. Aristotle talked about what is less and more; truth in number. So, when we combine these two, you really have to begin with your values first. Do the right thing, and then go to the value, and do it all the right way.

And that means, let’s not get confused. Because if you are being less bad and you think it’s good, you have to stop and think because you are being bad by definition, just less so. So, we get confused.

Circular Economy Report Guides Users to Increased Sustainability

What we really want to be is more good. Let’s do less bad for sure, but let’s also go out and do more good. And the statistical reference points for data are going to come through the IT to help us determine that. So, the IT department is actually the traffic control for good corporate behavior. 

Gardner: Gabrielle, some thoughts about why sustainability is an important driver for BT in general, and maybe some insights into how the IT function in particular can benefit?

Ginér: I don’t think we need a separate sustainability function for IT. It comes back to what William mentioned about values. For BT, sustainability is part of the company’s ethos. We want to see that throughout our organization. I sit in a central team, but we work closely with IT. It’s part of sharing a common vision and a common goal.

Positive actions, profitable results

Gardner: For those organizations planning on a hybrid IT future, where they are making decisions about how much public cloud, private cloud, and traditional IT — perhaps they should be factoring more about sustainability in terms of a lifecycle of products and the all-important carbon and energy equation.

How do we put numbers on this in ways that IT people can then justify against the all-important total cost of ownership and return on investment calculations across hybrid IT choices?

McDonough: Since the only constant in modern business life is high-speed change, you have to have change built into your day-to-day operations. And so, what is the change? The change will have an impact. The question is will it have a positive impact or a negative impact? If we look at the business, we want a positive impact economically; for the environment, we would like to have a positive impact there, too.

Since the only constant in modern business life is high-speed change … for the environment we would like to have a positive impact there, too.

When you look at all of that together as one top-line behavior, you realize it’s about revenue generation, not just about profit derivation. So, you are not just trying to squeeze out every penny to get profit, which is what’s left over; that’s the manager’s job. You are trying to figure out what’s the right thing to do and bring in the revenue; that’s the executive’s job. 

The executives see this and realize it’s about revenue generation actually. And so, we can balance our CAPEX and our OPEX and we can do optimization across it. That means a lot of equipment that’s sitting out there that might be suboptimal is still serviceable. It’s a valuable asset. Let it run but be ready to refurbish it when the time comes. In the meantime, you are going to shift to the faster, better systems that are optimized across the entire platform. Because then you start saving energy, you start saving money, and that’s all there is to it.

Circular Economy Report Guides Users to Increased Sustainability

Gardner: It seems like we are at the right time in the economy, and in the evolution of IT, for the economics to be working in favor of sustainability initiatives. It’s no coincidence that HPE is talking more about the economics of IT as well as sustainability issues. They are very closely linked.

Do you have studies at BT that help you make the economic case for sustainability, and not just that it’s the good or proper thing to do?

Ginér: Oh, yes, most definitely. Just last year through our Energy Efficiency Program, we saved 29 million pounds, and since we began looking at this in 2009-2010, we have saved more than 250 million pounds. So, there is definitely an economic case for being energy efficient.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Industrial-strength wearables combine with collaboration cloud to bring anywhere expertise to intelligent-edge work

The next BriefingsDirect industrial-edge innovation use-case examines how RealWear, Inc. and Hewlett Packard Enterprise (HPE) MyRoom combine to provide workers in harsh conditions ease in accessing and interacting with the best intelligence.

Stay with us to learn how a hands-free, voice-activated, and multimedia wearable computer solves the last few feet issue for delivering a business’ best data and visual assets to some of its most critical onsite workers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to describe the new high-water mark for wearable augmented collaboration technologies are Jan Josephson, Sales Director for EMEA at RealWear, and John “JT” Thurgood, Director of Sales for UK, Ireland, and Benelux at RealWear. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: A variety of technologies have come together to create the RealWear solution. Tell us why nowadays a hands-free, wearable computer needs to support multimedia and collaboration solutions to get the job done.

Thurgood: Over time, our industrial workers have moved through a digitization journey as they find the best ways to maintain and manage equipment in the field. They need a range of tools and data to do that. So, it could be an engineer wearing personal protective equipment in the field. He may be up on scaffolding. He typically needs a big bundle of paperwork, such as visual schematics, and all kinds of authorization documents. This is typically what an engineer takes into the field. What we are trying to do is make his life easier.

Thurgood

You can imagine it. An engineer gets to an industrial site, gets permission to be near the equipment, and has his schematics and drawings he takes into that often-harsh environment. His hands are full. He’s trying to balance and juggle everything while trying to work his way through that authorization process prior to actually getting on and doing the job – of being an engineer or a technician.

We take that need for physical documentation away from him and put it on an Android device, which is totally voice-controlled and hands-free. A gyroscope built into the device allows specific and appropriate access to all of those documents. He can even freeze at particular points in the document. He can refer to it visually by glancing down, because the screen is just below eye-line.

The information is available but not interfering from a safety perspective, and it’s not stopping him from doing his job. He has that screen access while working with his hands. The speakers in the unit also help guide him via verbal instructions through whatever the process may be, and he doesn’t even have to be looking at documentation.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

He can follow work orders and processes. And, if he hits a brick wall — he gets to a problem where, even after following the work processes and going through the documentation, it still doesn’t look right — what does he do? Well, he needs to phone a buddy, right? The way he does that is through the visual remote guidance (VRG) MyRoom solution from HPE.

He gets the appropriate expert on the line, and that expert can be thousands of miles away. The expert can see what’s going on through the 16-megapixel camera on the RealWear device. And he can talk him through the problem, even in harsh conditions because there are four noise-canceling microphones on the device. So, the expert can give detailed, real-time guidance as to how to solve the problem.

You know, Dana, typically that would take weeks of waiting for an expert to be available. The cost involved in getting the guy on-site to go and resolve the issue is expensive. Now we are enabling that end-technician to get any assistance he needs, once he is at the right place, at the right time.

Gardner: What was the impetus to create the RealWear HMT-1? Was there a specific use case or demand that spurred the design?

Military inspiration, enterprise adoption

Thurgood: Our chief technology officer (CTO), Dr. Chris Parkinson, was working in another organization that was focused on manufacturing military-grade screens. He saw an application opportunity for that in the enterprise environment.

And it now has wide applicability — whether it’s in the oil and gas industry, automotive, or construction. I’ve even had journalists wanting to use this device, like having a mobile cameraman.

He foresaw a wide range of use-cases, and so worked with a team — with our chief executive officer (CEO), Andy Lowery — to pull together a device. That design is IP66-rated, it’s hardened, and it can be used in all weather, from -20°C to 50°C, to do all sorts of different jobs.

There was nothing in the marketplace that provides these capabilities. We now have more than 10,000 RealWear devices in the field in all sorts of vertical industries.

The impetus was that there was nothing in the marketplace that provides these capabilities. People today are using iPads and tablets to do their jobs, but their hands are full. You can’t do the rest of the tasks that you may need to do using your hands.

We now have more than 10,000 RealWear devices in the field in all sorts of industrial areas. I have named a few verticals, but we’re discovering new verticals day-by-day.

Gardner: Jan, what were some of the requirements that led you to collaborate with HPE MyRoom and VRG? Why was that such a good fit?

Josephson: There are a couple of things HPE does extremely well in this field. In these remote, expert applications in particular, HPE designed their applications really well from a user experience (UX) perspective.

Josephson

At the end of the day, we have users out there and many of them are not necessarily engineers. So the UX side of an application is very important. You can’t have a lot of things clogging up your screen and making things too complicated. The interface has to be super simple.

The other thing that is really important for our customers is the way HPE does compression with their networked applications. This is essential because many times — if you are out on an oil rig or in the middle of nowhere — you don’t have the luxury of Wi-Fi or a 4G network. You are in the field.

The HPE solution, due to the compression, enables very high-quality video even at very-low bandwidth. This is very important for a lot of our customers. HPE is also taking their platform and enabling it to operate on-premises. That is becoming important because of security requirements. Some of the large users want a complete solution inside of their firewall.

So it’s a very impressive piece of software, and we’re very happy that we are in this partnership with HPE MyRoom.

Gardner: In effect, it’s a cloud application now — but it can become a hybrid application, too.

Connected from the core to the edge

Thurgood: What’s really unique, too, is that HPE has now built-in object recognition within the toolset. So imagine you’re wearing the RealWear HMT-1, you’re looking at a pump, a gas filter, or some industrial object. The technology is now able to identify that object and provide you with the exact work orders and documentation related to it.

We’re now able to expand out from the historic use-case of expert remote visual guidance support into doing so much more. HPE has really pushed the boundaries out on the solution.

Gardner: It’s a striking example of the newfound power of connecting a core cloud capability with an edge device, and with full interactivity. Ultimately, this model brings the power of artificial intelligence (AI) running on a data center to that edge, and so combines it with the best of human intelligence and dexterity. It’s the best of all worlds.

JT, how is this device going to spur new kinds of edge intelligence?

Thurgood: It’s another great question because 5G is now coming to bear as well as Wi-Fi. So, all of a sudden, almost no matter where you are, you can have devices that are always connected via broadband. The connectivity will become ubiquitous.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

Now, what does that do? It means never having an offline device. All of the data, all of your Internet of Things (IoT) analytics and augmented and assisted reality will all be made available to that remote user.

So, we are looking at the superhuman versions of engineers and technicians. Historically you had a guy with paperwork. Now, if he’s always connected, he always has all the right documentation and is able to act and resolve tasks with all of the power and the assistance he needs. And it’s always available right now.

So, yes, we are going to see more intellectual value being moved down to the remote, edge user.

At RealWear, we see ourselves as a knowledge-transfer company. We want the user of this device to be the conduit through which you can feed all cloud-analyzed data. As time goes by, some of the applications will reside in the cloud as well as on the local device. For higher-order analytics there is a hell of a lot of churning of data required to provide the best end results. So, that’s our prediction.

Gardner: When you can extend the best intelligence to any expert around the world, it’s a very powerful concept.

For those listening to or reading this podcast, please describe the HMT-1 device. It’s fairly small and resides within a helmet.

Using your headwear

Thurgood: We have a horseshoe-shaped device with a screen out in front. Typically, it’s worn within a hat. Let’s imagine, you have a standard cap on your head. It attaches to the cap with two clips on the sides. You then have a screen that protrudes from the front of the device that is held just below your eye-line. The camera is mounted on the side. It becomes a head-worn tablet computer.

It can be worn in hard hats, bump caps, normal baseball caps, or just with straps (and no hat). It performs regardless of the environment you are in — be that wind, rain, or gales, such as when working out on an offshore oil and gas rig. Or, if you are an automotive technician working in a noisy garage, it simply complements the protective equipment you need to use in the field.

Gardner: When you can bring this level of intelligence and instant access of experts to the edge, wherever it is, you’re talking about new economics. These type of industrial use cases can often involve processes where downtime means huge amounts of money lost. Quickly intercepting a problem and solving it fast can make a huge difference.

Do you have examples that provide a sense of the qualitative and quantitative benefits when this is put to good use?

Thurgood: There are a number of examples. Take automotive to start with. If you have a problem with your vehicle today, you typically take it to a dealership. That dealer will try to resolve the issue as quickly as it can. Let’s say the dealership can’t. There is a fault on the car that needs some expert assistance. Today, the dealership phones the head office and says, “Hey, I need an expert to come down and join us. When can you join us?” And there is typically a long delay.

So, what does that mean? That means my vehicle is off the road. It means I have to have a replacement vehicle. And that expert has to come out from head office to spend time traveling to be on-site to resolve the issue.

What can happen now using the RealWear device in conjunction with the HPE VRG MyRoom is that the technician contacts the expert engineer remotely and gets immediate feedback and assistance on resolving the fault. As you can imagine, the customer experience is vastly improved based on resolving the issue in minutes – and not hours, days, or even weeks.

Josephson: It’s a good example because everyone can relate to a car. Also, nowadays the car manufacturers are pushing a lot more technology into the cars. They are almost computers on wheels. When a car has a problem, chances are very slim you will have the skill-set needed in that local garage.

The whole automotive industry has a big challenge because they have all of these people in the field who need to learn a lot. Doing it the traditional way — of getting them all into a classroom for six weeks — just doesn’t cut it. So, it’s now all about incident-based, real-time learning.

Another benefit is that we can record everything in MyRoom. So if I have a session that solves a particular problem, I can take that recording and I have a value of one-to-many rather than one-to-one. I can begin building up my intellectual property, my FAQs, my better customer service. A whole range of values are being put in front here.

Gardner: You’re creating an archive, not just a spot solution. That archive can then be easily accessible at the right time and any place.

Josephson: Right.

Gardner: For those listeners wondering whether RealWear and VRG are applicable to their vertical industry, or their particular problem set, what are a couple of key questions that they might ask themselves?

Shared know-how saves time and money

Thurgood: Do your technicians and engineers need to use their hands? Do they need to be hands-free? If so, you need a device like this. It’s voice-controlled, it’s mounted on your head.

Do they wear personal protective equipment (PPE)? Do they have to wear gloves? If so, it’s really difficult to use a stylus or poke the screen of a tablet. With RealWear, we provide a totally hands-free, eyes-forward, very safe deployment of knowledge-transfer technology in the field.

If you need your hands free in the field, or if you’re working outdoors, up on towers and so on, it’s a good use of the device.

Josephson: Also, if your business includes field engineers who travel, do you have many travel days where someone had to go back because they forgot something, or didn’t have the right skill-set on the first trip?

If instead you can always have someone available via the device to validate what we think is wrong and actually potentially fix it, I mean, it’s a huge savings. Fewer return or duplicate trips. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the data science profession is growing in value and impact across the business world

The next BriefingsDirect business trends panel discussion explores how the role of the data scientist in the enterprise is expanding in both importance and influence.

Data scientists are now among the most highly sought-after professionals, and they are being called on to work more closely than ever with enterprise strategists to predict emerging trends, optimize outcomes, and create entirely new kinds of business value.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about modern data scientists, how they operate, and why a new level of business analysis professional certification has been created by The Open Group, we are joined by Martin Fleming, Vice President, Chief Analytics Officer, and Chief Economist at IBM; Maureen Norton, IBM Global Data Scientist Professional Lead, Distinguished Market Intelligence Professional, and author of Analytics Across the Enterprise; and George Stark, Distinguished Engineer for IT Operations Analytics at IBM. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are now characterizing the data scientist as a profession. Why have we elevated the role to this level, Martin? 

Fleming

Fleming: The benefits we have from the technology that’s now available allow us to bring together the more traditional skills in the space of mathematics and statistics with computer science and data engineering. The technology wasn’t as useful just 18 months ago. It’s all about the very rapid pace of change in technology.

Gardner: Data scientists used to be behind-the-scenes people; sneakers, beards, white lab coats, if you will. What’s changed to now make them more prominent?

Norton

Norton: Today’s data scientists are consulting with the major leaders in each corporation and enterprise. They are consultants to them. So they are not in the back room, mulling around in the data anymore. They’re taking the insights they’re able to glean and support with facts and using them to provide recommendations and to provide insights into the business.

Gardner: Most companies now recognize that being data-driven is an imperative. They can’t succeed in today’s world without being data-driven. But many have a hard time getting there. It’s easier said than done. How can the data scientist as a professional close that gap?

Stark

Stark: The biggest drawback in integration of data sources is having disparate data systems. The financial system is always separate from the operational system, which is separate from the human resources (HR) system. And you need to combine those and make sure they’re all in the same units, in the same timeframe, and all combined in a way that can answer two questions. You have to answer, “So what?” And you have to answer, “What if?” And that’s really the challenge of data science.
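To make that integration step concrete, here is a minimal, hypothetical sketch of the kind of alignment Stark describes: bringing a financial extract and an operational extract into the same units and the same monthly timeframe before asking “So what?” and “What if?” The column names and figures are illustrative assumptions, not any particular company’s data.

```python
import pandas as pd

# Hypothetical extracts from two disparate systems (illustrative only).
finance = pd.DataFrame({
    "date": pd.to_datetime(["2019-01-31", "2019-02-28", "2019-03-31"]),
    "cost_usd_thousands": [120, 135, 128],        # monthly, in $ thousands
})
operations = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-01-15", "2019-02-10", "2019-03-20"]),
    "units_produced": [4200, 4650, 4400],         # recorded per production run
})

# Normalize units and timeframe: express cost in dollars and roll both
# sources up to month-end so they can be joined one-to-one.
finance["cost_usd"] = finance["cost_usd_thousands"] * 1000
finance_monthly = finance.set_index("date")[["cost_usd"]].resample("M").sum()
ops_monthly = operations.set_index("timestamp")[["units_produced"]].resample("M").sum()

combined = finance_monthly.join(ops_monthly, how="inner")
combined["cost_per_unit"] = combined["cost_usd"] / combined["units_produced"]
print(combined)  # one table that can start to answer "So what?" and "What if?"
```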

Gardner: An awful lot still has to go on behind the scenes before you get to the point where the “a-ha” moments and the strategic inputs take place.

Martin, how will the nature of work change now that the data scientist as a profession is arriving – and probably just at the right time?

Fleming: The insights that data scientists provide allow organizations to understand where the opportunities are to improve productivity, how they can help make workers more effective and productive, and how to create more value. This enhances the role of the individual employees. And it’s that value creation, the integration of the data that George talked about, and the use of analytic tools that’s driving fundamental changes across many organizations.

Captain of the data team

Gardner: Is there any standardization as to how the data scientist is being organized within companies? Do they typically report to a certain C-suite executive or another? Has that settled out yet? Or are we still in a period of churn as to where the data scientist, as a professional, fits in?

Norton: We’re still seeing a fair amount of churn. Different organizing approaches have been tried. For example, the centralized center of excellence that supports other business units across a company has a lot of believers and followers.

The economies of scale in that approach help. It’s difficult to find one person with all of the skills you might need. I’m describing the role of consultant to the presidents of companies. Sometimes you can’t find all of that in one individual — but you can build teams that have complementary skills. We like to say that data science is a team sport.

Gardner: George, are we focusing the new data scientist certification on the group or the individual? Have we progressed from the individual to the group yet?

Stark: I don’t believe we are there yet. We’re still certifying at the individual level. But as Maureen said, and as Martin alluded to, the group approach has a large effect on how you get certified and what kinds of solutions you come up with. 

Gardner: Does the certification also lead to defining the managerial side of this group, with the data scientist certified in organizing that group or office in a methodical, proven way?

Learn How to Become Certified as a Data Scientist

Fleming: The certification we are announcing focuses not only on the technical skills of a data scientist, but also on project management and project leadership. So as data scientists progress through their careers, the more senior folks are certainly in a position to take on significant leadership and management roles.

And we are seeing over time, as George referenced, a structure beginning to appear. First in the technology industry, and over time, we’ll see it in other industries. But the technology firms whose names we are all familiar with are the ones who have really taken the lead in putting the structure together.

Gardner: How has the “day in the life” of the typical data scientist changed in the last 10 years?

Stark: It’s scary to say, but I have been a data scientist for 30 years. I began writing my own Fortran 77 code to integrate datasets, compute eigenvalues and eigenvectors, and build models that would discriminate among key objects and allow us to predict what something was.

The difference today is that I can do that in an afternoon. We have the tools, datasets, and all the capabilities with visualization tools, SPSS, IBM Watson, and Tableau. Things that used to take me months now take a day and a half. It’s incredible, the change.
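For a sense of how compressed that workflow has become, here is a minimal sketch of the pattern Stark describes (an eigen decomposition for dimensionality reduction feeding a simple discriminant model), now a few lines of Python with NumPy and scikit-learn instead of hand-written Fortran. The sample dataset and parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# A small sample dataset stands in for integrated operational data.
X, y = load_iris(return_X_y=True)

# Eigen decomposition of the covariance matrix, i.e., principal components.
cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]      # largest variance first
components = eigenvectors[:, order[:2]]    # keep the top two components
X_reduced = X @ components

# Fit a discriminant model on the reduced data and check how well it predicts.
X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.3, random_state=0)
model = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```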

Gardner: Do you as a modern data scientist find yourself interpreting what the data science can do for the business people? Or are you interpreting what the business people need, and bringing that back to the data scientists? Or perhaps both?

Collaboration is key

Stark: It’s absolutely both. I was recently with a client, and we told them, “Here are some things we can do today.” And they said, “Well, what I really need is something that does this.” And I said, “Oh, well, we can do that. Here’s how we would do it.” And we showed them the roadmap. So it’s both. I will take that information back to my team and say, “Hey, we now need to build this.”

Gardner: Is there still a language, culture, or organizational divide? It seems to me that you’re talking apples and oranges when it comes to business requirements and what the data and technology can produce. How can we create a Rosetta Stone effect here?

Norton: In the certification, we are focused on ensuring that data scientists understand the business problems. Everything begins from that.

In the certification, we are focused on ensuring that data scientists understand the business problems. Everything begins from that. Knowing how to ask the right questions, to scope the problem, and then to translate it is essential.

Knowing how to ask the right questions, to scope the problem, and then to translate it is essential. You have to look at the available data, and infer some, to come up with insights and a solution. It’s increasingly important that you begin with the problem. You don’t begin with your solution and say, “I have this many things I can work with.” It’s more like, “How are we going to solve this and draw on the innovation and creativity of the team?”

Gardner: I have been around long enough to remember when the notion of a chief information officer (CIO) was new and fresh. There are some similarities to what I remember from those conversations in what I’m hearing now. Should we think about the data scientist as a “chief” something, at the same level as a chief technology officer (CTO) or a CIO?

Chief Data Officer defined 

Fleming: There are certainly a number of organizations that have roles such as mine, where we’ve combined economics and analytics. Amazon has done it on a larger scale, given the nature of their business, with supply chains, pricing, and recommendation engines. But other firms in the technology industry have as well.

We have found that there are still three separate needs, if you will. There is an infrastructure need that CIO teams are focused on. There are significant data governance and management needs that typically chief data officers (CDOs) are focused on. And there are substantial analytics capabilities that typically chief analytics officers (CAOs) are focused on.

It’s certainly possible in many organizations to combine those roles. But in an organization the size of IBM, and other large entities, it’s very difficult because of the complexity and requirements across those three different functional areas to have that all embodied in a single individual.

Gardner: In that spectrum you just laid out – analytics, data, and systems — where does The Open Group process for a certified data scientist fit in?

Fleming: It’s really on the analytics side. A lot of what CDOs do is data engineering, creating data platforms. At IBM, we use the term Watson Data Platform because it’s built on a certain technology that’s in the public cloud. But that’s an entirely separate challenge from being able to create the analytics tools and deliver the business insights and business value that Maureen and George referred to.

Gardner: I should think this is also going to be of pertinent interest to government agencies, to nonprofits, to quasi-public-private organizations, alliances, and so forth.

Given that this has societal-level impacts, what should we think about in improving the data scientists’ career path? Do we have the means of delivering the individuals needed from our current educational tracks? How do education and certification relate to each other?

Academic avenues to certification

Fleming: A number of universities have over the past three or four years launched programs for a master’s degree in data science. We are now seeing the first graduates of those programs, and we are recruiting and hiring.

I think this will be the first year that we bring in folks who have completed a master’s in data science program. As we all know, universities change very slowly. It’s the early days, but demand will continue to grow. We have barely scratched the surface in terms of the kinds of positions and roles across different industries. 

That growth in demand will cause many university programs to grow and expand to feed that career track. It takes 15 years to create a profession, so we are in the early days of this.

Norton: With the new certification, we are doing outreach to universities because several of them have master’s in data analytics programs. They do significant capstone-type projects, with real clients and real data, to solve real problems.

We want to provide a path for them into certification so that students can earn, for example, their first project profile, or experience profile, while they are still in school.

Gardner: George, on the organic side — inside of companies where people find a variety of tracks to data scientist — where do the prospects come from? How does organic development of a data scientist professional happen inside of companies?

Stark: At IBM, in our group, Global Services, in particular, we’ve developed a training program with a set of badges. They get rewarded for achievement at various levels of education. But you still need to have projects you’ve done with the techniques you’ve learned through education to get to certification.

Having education is not enough. You have to apply it to get certified.

Gardner: This is a great career path, and there is tremendous demand in the market. It also strikes me as a very fulfilling and rewarding career path. What sorts of impacts can these individuals have?

Learn How to Become Certified as a Data Scientist

Fleming: Businesses have traditionally been managed through a profit-and-loss statement, an income statement, for the most part. There are, of course, other data sources — but they’re largely independent of each other. These include sales opportunity information in a CRM system, supply chain information in ERP systems, and financial information portrayed in an income statement. These get the most rigorous attention, shall we say.

We’re now in a position to create much richer views of the activity businesses are engaged in. We can integrate across more datasets now, including human resources data. In addition, machine learning (ML) and artificial intelligence (AI) are predictive by nature. We are in a position not only to bring the data together, but to provide a richer view of what’s transpiring at any point in time, and also to generate a better view of where businesses are moving to.

It may be about defining a sought-after destination, or there may be a need to close gaps. But understanding where the business is headed in the next 3, 6, 9, and 12 months is a significant value-creation opportunity.

Gardner: Are we then thinking about a data scientist as someone who can help define what the new, best business initiatives should be? Rather than finding those through intuition, or gut instinct, or the highest paid person’s opinion, can we use the systems to tell us where our next product should come from?

Pioneers of insight

Norton: That’s certainly the direction we are headed. We will have systems that augment that kind of decision-making. I view data scientists as pioneers. They’re able to go into big data, dark data, and a lot of different places and push the boundaries to come out with insights that can inform in ways that were not possible before.

It’s a very rewarding career path because there is so much value and promise that a data scientist can bring. They will solve problems that hadn’t been addressed before.

It’s a very exciting career path. We’re excited to be launching the certification program to help data scientists gain a clear path and to make sure they can demonstrate the right skills.

It’s a very rewarding career path because there is so much value and promise that a data scientist can bring. They will solve problems that hadn’t been addressed before.

Gardner: George, is this one of the better ways to change the world in the next 30 years?

Stark: I think so. If we can get more people to do data science and understand its value, I’d be really happy. It’s been fun for 30 years for me. I have had a great time.

Gardner: What comes next on the technology side that will empower the data scientists of tomorrow? We hear about things like quantum computing, distributed ledger, and other new capabilities on the horizon.

Future forecast: clouds

Fleming: In the immediate future, new benefits are largely coming because we have both public cloud and private cloud in a hybrid structure, which brings the data, compute, and the APIs together in one place. And that allows for the kinds of tools and capabilities that are necessary to significantly improve the performance and productivity of organizations. 

Blockchain is making enormous progress and very quickly. It’s essentially a data management and storage improvement, but then that opens up the opportunity for further ML and AI applications to be built on top of it. That’s moving very quickly. 

Quantum computing is further down the road. But it will change the nature of computing. It’s going to take some time to get there, but it nonetheless is very important and is part of what we are looking at over the horizon. 

Gardner: Maureen, what do you see on the technology side as most interesting in terms of where things could lead to the next few years for data science? 

Norton: The continued evolution of AI is pushing boundaries. One of the really interesting areas is the emphasis on transparency and ethics, to make sure that the systems are not introducing or perpetuating a bias. There is some really exciting work going on in that area that will be fun to watch going forward. 

Gardner: The data scientist needs to consider not just what can be done, but what should be done. Is that governance angle brought into the certification process now, or is it something that will come later?

Stark: It’s brought into the certification now when we ask how things were validated and how the models got implemented in the environment. That’s one of the things that data scientists need to answer as part of the certification. We also believe that in the future we are going to need some sort of code of ethics, and some methods for bias detection, analysis, and measurement; those things don’t exist today but will have to.

Gardner: Do you have any examples of data scientists doing work that’s new, novel, and exciting?

Rock star potential

Fleming: We have a team led by a very intelligent and aggressive young woman who has put together a significant product recommendation tool for IBM. Folks familiar with IBM know it has a large number of products and offerings. In any given client situation the seller wants to be able to recommend to the client the offering that’s most useful to the client’s situation. 

And our recommendation engines can now make those recommendations to the sellers. That capability really hasn’t existed in the past and is now creating enormous value — not only for the clients but for IBM as well. 

Gardner: Maureen, do any examples jump to mind that illustrate the potential of the data scientist? 

Norton: We wrote a book, Analytics Across the Enterprise, to explain examples across nine different business units. There have been some great examples in terms of finance, sales, marketing, and supply chain.

Learn How to Become Certified as a Data Scientist

Gardner: Does any use-case scenario come to mind where the certification may have been useful?

Norton: Certification would have been useful to an individual in the past because it helps map out how to become the best practitioner you can be. We have three different levels of certification going up to the thought leader. It’s designed to help that professional grow within it.

Stark: A young man who works for me in Brazil built a model for one of our manufacturing clients that identifies problematic infrastructure components and recommends actions to take on those components. And when the client implemented the model, they saw a 60 percent reduction in certain incidents and a 40,000-hour-a-month increase in availability for their supply chain. And we didn’t have a certification for him then — but we will have now. 

Gardner: So really big improvement. It shows that being a data scientist means you’re impactful and it puts you in the limelight.

IBM has built an internal process that matches with The Open Group. Other companies are getting accredited for running a version of the certification themselves, too.

Stark: And it was pretty spectacular because the CIO for that company stood up in front of his whole company — and in front of a group of analysts — and called him out as the data scientist that solved this problem for their company. So, yeah, he was a rock star for a couple days. 

Gardner: For those folks who might be more intrigued with a career path toward certification as a data scientist, where might they go for more information? What are the next steps when it comes to the process through The Open Group, with IBM, and the industry at large? 

Where to begin

Norton: The Open Group officially launched this in January, so anyone can go to The Open Group website and check under certifications. They will be able to read the information about how to apply. Some companies are accredited, and others can get accredited for running a version of the certification themselves. 

IBM recently went through the certification process. We have built an internal process that matches with The Open Group. People can apply either directly to The Open Group or, if they happen to be within IBM or one of the other companies who will certify, they can apply that way and get the equivalent of it being from The Open Group. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Why enterprises should approach procurement of hybrid IT in entirely new ways

The next BriefingsDirect hybrid IT management strategies interview explores new ways that businesses should procure and consume IT-as-a-service. We’ll now hear from an IT industry analyst on why changes in cloud deployment models are forcing a rethinking of IT economics — and maybe even the very nature of acquiring and cost-optimizing digital business services.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the everything-as-a-service business model is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving change in the procurement of hybrid- and multi-cloud services?

Dillingham

Dillingham: What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.

There is also interest in hybrid cloud, as well as multi-cloud management and governance. Those amount to complexities that the public clouds are not set up for and are not able to address because they are focused on their own platforms.

Learn How to Better Manage Multi-Cloud Sprawl

Gardner: So the way you acquire IT these days isn’t apples or oranges, public or private, it’s more like … fruit salad. There are so many different ways to acquire IT services that it’s hard to measure and to optimize. 

Dillingham: And there are trade-offs. Some organizations are focused on and adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.

Gardner: For those organizations that don’t want mismanaged “fruit salad” — that are trying to homogenize their acquisition of IT services even as they use hybrid cloud approaches — does this require a reevaluation of how IT in total is financed? 

Champion the cloud

Dillingham: Absolutely, and that’s something you can address, regardless of whether you’re adopting a single cloud or multiple clouds. The more you use multiple resources, the more you are going to consider tools that address multiple infrastructures — and not base your capabilities on a single vendor’s toolset. You are going to go with a cloud management vendor that produces tools that comprehensively address security, compliance, cost management, and monitoring, et cetera.
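As a rough illustration of the cost-management piece of that tooling, here is a minimal, hypothetical sketch of normalizing usage records from multiple infrastructures into a single monthly spend view, which is the kind of cross-platform visibility being described. The provider names, field names, and rates are invented for the example and do not reflect any vendor’s schema or pricing.

```python
from dataclasses import dataclass

# Hypothetical, simplified billing records exported from several platforms.
@dataclass
class UsageRecord:
    provider: str
    service: str
    hours: float
    rate_per_hour_usd: float

records = [
    UsageRecord("cloud_a", "compute", 720.0, 0.096),
    UsageRecord("cloud_a", "storage", 720.0, 0.010),
    UsageRecord("cloud_b", "compute", 400.0, 0.088),
    UsageRecord("on_prem", "compute", 720.0, 0.070),
]

# Roll everything up to a monthly cost per provider so spend can be
# compared and governed in one place, regardless of where workloads run.
monthly_cost = {}
for r in records:
    monthly_cost[r.provider] = monthly_cost.get(r.provider, 0.0) + r.hours * r.rate_per_hour_usd

for provider, cost in sorted(monthly_cost.items()):
    print(f"{provider}: ${cost:,.2f} per month")
```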

Gardner: Does the function of IT acquisitions now move outside of IT? Should companies be thinking about a chief procurement officer (CPO) or chief financial officer (CFO) becoming a part of the IT purchasing equation?

Dillingham: By virtue of the way cloud has been adopted — more by the business units — they got ahead of IT in many cases. This is now being pushed back toward gaining the fuller financial view. That move doesn’t make the IT decision-maker into a CFO as much as turn them into a champion of IT. And IT goes back to being the governance arm, where traditionally it has been managing cost, security, and compliance.

It’s natural for the business units and developers to now look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.

IT is best set up to look at the big picture across vendors and across infrastructures rather than the individual team-by-team or business unit-by-business unit decisions that have been made so far. They need to aggregate the cloud strategy at the highest organizational level.

Gardner: A central tenet of good procurement is to look for volume discounts and to buy in bulk. Perhaps having that holistic and strategic approach to acquiring cloud services lends itself to a better bargaining position? 

Learn How to Make Hybrid IT Simple

Dillingham: That’s absolutely the pitch of a cloud-by-cloud vendor approach, and there are trade-offs. You can certainly aggregate more spend on a single cloud vendor and potentially achieve bigger discounts through that aggregation.

The rebuttal is that on a long-term basis, your negotiating leverage in that relationship is constrained versus if you have adopted multiple cloud infrastructures and can dialogue across vendors on pricing and discounting.

Now, that may turn into more of an 80/20-, 90/10-split than a 50/50-split, but at least by having some cross-infrastructure capability — by setting yourself up with orchestration, monitoring, and governance tools that run across multiple clouds — you are at least in a strategic position from a competitive sourcing perspective.

The trade-off is the cost-aggregation and training necessary to understand how to use those different infrastructures — because they do have different interfaces, APIs, and the automation is different.

Gardner: I think that’s why we’ve seen vendors like Hewlett Packard Enterprise (HPE) put an increased emphasis on multi-cloud economics, and not just the capability to compose cloud services. The issues we’re bringing up force IT to rethink the financial implications, too. Are the vendors on to something here when it comes to providing insight and experience in managing a multi-cloud market?

Follow the multi-cloud tour guide

Dillingham: Absolutely, and certainly from the perspective that when we talk multi-cloud, we are not just talking multiple public clouds. There is a reality of large existing investments in private infrastructure that continue for various purposes. That on-premises technology also needs cost optimization, security, compliance, auditability, and customization of infrastructure for certain workloads.

Consultative input is very valuable when you see how much pattern-matching there is across customers — and not just within the same industry but cross industries.

That means the ultimate toolset to be considered needs to work across both public and private infrastructures. A vendor that’s looking beyond just public cloud, like HPE, and delivers a multi-cloud and hybrid cloud management orientation is set up to be a potential tour guide and strategic consultative adviser. 

And that consultative input is very valuable when you see how much pattern-matching there is across customers – and not just within the same industry but across industries. The best insights will come from knowing what it looks like to triage application portfolios, what migrations you want across cloud infrastructures, and the proper setup of comprehensive governance, control processes, and education structures.

Gardner: Right. I’m sure there are systems integrators, in addition to some vendors, that are going to help make the transition from traditional IT procurement to everything-as-a service. Their lessons learned will be very valuable.

That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes, because as we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Dillingham: Yes, and the patterns of progression in cloud orientation are clear for those consultative partners, based on dozens of implementations and executions. From that experience they are far more thoroughly aware of the patterns and how to avoid falling into the traps and pitfalls along the way, more so than a single organization could expect, internally, to be savvy about.

Gardner: It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services. It becomes very complex, very dynamic.

Do you have any advice for how companies should be better managing cloud adoption? It seems to me there should be collaboration at a higher level, or a different type of management, when it comes to optimizing for multi-cloud and hybrid-cloud economics.

Cloud collaboration strategy 

Dillingham: That really comes back to the requirements of the specific IT organization. The more business units there are in the organization, the more critical IT becomes in driving collaboration at the highest organizational level and in taking responsibility for the overall cloud strategy.

Remove Complexity From Multi-Cloud and Hybrid IT

The cloud strategy across the topics of platform selection, governance, process, and people skills — that’s the type of collaboration needed. It flows into the consultancies’ recommendations on how to avoid the traps and pitfalls, manage expectations and goals, and deliver clear outcomes on project execution. And it means making sure that security and compliance are considered and involved from a functional perspective, and all the way down the list, so that it progresses as a long-term success.

The decision of what advice to bring in is really about the topic and the selection on the menu. Have you considered the uber strategy and approach? How well have you triaged your application portfolio? How can you best match capabilities to apps across infrastructures and platforms?

Do you have migration planning? How about migration execution? Those can be similar or separate items. You also have development methodologies, and the software platform choices to best support all of that along with security and compliance expertise. These are all aspects certain consultancies will have expertise on more than others, and not many are going to be strong across all of them. 
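As a toy illustration of the application-portfolio triage step mentioned above, the sketch below scores applications against a few placement factors and suggests a landing zone. The factors, weights, and application names are made-up assumptions for demonstration, not a prescribed model.

```python
# Score each application against a few placement factors and suggest a landing zone.
# Factors, weights, thresholds, and apps are illustrative assumptions only.
WEIGHTS = {"data_sovereignty": 3, "latency_sensitivity": 2, "cloud_native_readiness": -2}


def landing_zone(scores: dict) -> str:
    """Higher totals favor private/on-premises; lower totals favor public cloud."""
    total = sum(WEIGHTS[factor] * scores.get(factor, 0) for factor in WEIGHTS)
    if total >= 4:
        return "private / on-premises"
    if total >= 0:
        return "hybrid (re-platform first)"
    return "public cloud"


apps = {
    "payments-core": {"data_sovereignty": 3, "latency_sensitivity": 3, "cloud_native_readiness": 1},
    "marketing-site": {"data_sovereignty": 0, "latency_sensitivity": 1, "cloud_native_readiness": 3},
}
for name, scores in apps.items():
    print(name, "->", landing_zone(scores))
```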

Gardner: It certainly sounds like a lot of planning and perhaps reevaluating the ways of the past. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Manufacturer gains advantage by expanding IoT footprint from many machines to many insights

The next BriefingsDirect manufacturing modernization and optimization case study centers on how a Canadian maker of containers leverages the Internet of Things (IoT) to create a positive cycle of insights and applied learning.

We will now hear how CuBE Packaging Solutions, Inc. in Aurora, Ontario has deployed edge intelligence to make 21 formerly isolated machines act as a single, coordinated system as it churns out millions of reusable package units per month.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore how harnessing edge data with more centralized real-time analytics integration cooks up the winning formula for an ongoing journey of efficiency, quality control, and end-to-end factory improvement.

Here to describe the challenges and solutions of injecting IoT into a plastic container producer’s success journey is Len Chopin, President at CuBE Packaging Solutions. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Len, what are the top trends and requirements driving the need for you to bring more insights into your production process?

Chopin

Chopin: The very competitive nature of injection molding requires us to stay ahead of the competition, and we utilize the intelligent edge to do exactly that. By tapping into and optimizing the equipment, we gain downtime efficiencies, improved throughput, and all the things that help drive more to the bottom line.

Gardner: And this is a win-win because you’re not only improving quality but you’re able to improve the volume of output. So it’s sort of the double-benefit of better and bigger.

Chopin: Correct. Driving volume is key. When we are running, we are making money, and we are profitable. By optimizing that production, we are even that much more profitable. And by using analytics and protocols for preventive and predictive maintenance, the IoT solutions drive an increase in the uptime of the equipment.

Gardner: Why are sensors in and of themselves not enough to attain intelligence at the edge?

Chopin: The sensors are reactive. They give you information. It’s good information. But leaving it up to the people to interpret [the sensor data] takes time. Utilizing analytics, by pooling the data and looking for trends, means IoT pushes to us what we need to know and when we need to know it.

Otherwise we tend to look at a lot of information that’s not useful. Utilizing the intelligent edge means it’s pushing to us the information we need, when we need it, so we can react appropriately with the right resources.
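A minimal sketch of that push-only-what-matters idea follows: it raises an alert when the average machine cycle time drifts beyond a threshold, rather than surfacing every raw reading. The machine ID, cycle times, window size, and drift threshold are all hypothetical.

```python
from collections import deque
from statistics import mean


def watch_cycle_times(readings, window: int = 20, drift_pct: float = 5.0):
    """Yield an alert only when the recent average cycle time drifts more than
    drift_pct above the established baseline, instead of surfacing every reading."""
    baseline = None
    recent = deque(maxlen=window)
    for machine_id, cycle_time in readings:
        recent.append(cycle_time)
        if len(recent) < window:
            continue
        if baseline is None:
            baseline = mean(recent)  # the first full window sets the baseline
            continue
        drift = 100 * (mean(recent) - baseline) / baseline
        if drift > drift_pct:
            yield f"Machine {machine_id}: cycle time drifted {drift:.1f}% above baseline"


# Hypothetical stream: machine 07 slowly slips from a 9.0 s cycle toward 10.2 s.
stream = [("07", 9.0 + i * 0.02) for i in range(60)]
for alert in watch_cycle_times(stream):
    print(alert)
```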

Gardner: In order to understand the benefits of doing this well, let’s talk about the state at CuBE Packaging when you didn’t have sensors. You weren’t processing the data and you weren’t creating a cycle of improvement?

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Chopin: That was firefighting mode. You really have no idea of what’s running, how it’s running, whether it’s trending down, whether it’s fast enough, and whether it’s about to go down. It equates to flying blind, with blinders on. It’s really hard in a manufacturing environment to run a business that way. A lot of people do it, and it may seem affordable — but it’s not very economical. It really doesn’t drive more value to the bottom line.

Gardner: What have been the biggest challenges in moving beyond that previous “firefighting” state to implementing a full IoT capability?

Chopin: The dynamic within our area in Canada is resources. There is a lot of technology out there. We rise to the top by learning all about what we can do at the edge, how we can best apply it, and how we can implement that into a meaningful roadmap with the right resources and the technical skills of our IT staff.

It’s a new venture for us, so it’s definitely been a journey. It is challenging. Getting that roadmap and then sticking to the roadmap is challenging, but as we go through the journey we learn what is most relevant. It’s been a dynamic roadmap, which it has to be as the technology evolves and we delve into the world of IoT, which is quite fascinating for us.

Gardner: What would you say has been the hardest nut to crack? Is it the people, the process, or the technology? Which has been the most challenging?

Trust the IoT process 

Chopin: I think the process, the execution. But we found that once you deploy IoT and begin collecting data and embarking on analytics, the creative juices start flowing among a lot of the people who previously were uninterested in the whole process.

But then they help steer the ship, and some will change the direction slightly or identify a need that we previously didn’t know about – a more valuable path than the one we were on. So people are definitely part of the solution, not part of the problem. For us, it’s about executing to their new expectations and applying the information and technology to find solutions to their specific problems.

We have had really good buy-in with the people, and it’s just become about layering on the technical resources to help them execute their vision.

Gardner: You have referred to becoming, “the Google of manufacturing.” What do you mean by that, and how has Hewlett Packard Enterprise (HPE) supported you in gaining that capability and intelligence?


Chopin: “The Google of manufacturing” was first coined by our owner, JR. It’s his vision so it’s my job to bring it to fruition. The concept is that there’s a lot of cool stuff out there, and we see that IoT is really fascinating.

My job is to take that technology and turn it into an investment with a return on investment (ROI) from execution. How is that all going to help the business? The “Google of manufacturing” is about growth for us — by using any technology that we see fit and having the leadership to be open to those new ideas and concepts. Even without having a clear vision of the roadmap, it means focusing on the end results. It’s been a unique situation. So far it’s been very good for us.

Gardner: How has HPE helped in your ability to exploit technologies both at the edge and at the data center core?

Chopin: We just embarked on a large equipment expansion [with HPE], which is doubling our throughput. Our IT backbone, our core, was just like our previous equipment — very old, antiquated, and not cutting edge at all. It was a burden as opposed to an asset.

Part of moving to IoT was putting in a solid platform, which HPE has provided. We worked with our integrator and a project team that mapped out our core for the future. It’s not just built for today’s needs — it’s built for expansion capabilities. It’s built for year two, year three. Even if we’re not fully utilizing it today, it has been built for the future.

HPE has more things coming down the pipeline that are built on and integrated with this core, so there are no real limitations to the system. No longer will we have to scrap an old system and put a new one in. It’s now scalable, and we think of it as the platform for becoming the “Google of manufacturing,” which is going to be critical for us.

Gardner: Future-proofing infrastructure is always one of my top requirements. All right, please tell us about CuBE Packaging: your organization’s size, what you’re doing, and what your end products are.

The CuBE takeaway

Chopin: We have a 170,000-square-foot facility with about 120 employees producing injection-molded plastic containers for the food service industry, for home-meal replacement, and for takeout markets, distributed across Canada as well as the US, which is obviously a huge and diverse market.

We also have a focus on sustainability. Our products are reusable and recyclable. They are a premier product that comes with a premier price. They are also customizable and brandable, which has been key to CuBE’s success. We partner with restaurants, with sophisticated customers, who see value in the specific branding and in having a robust packaging solution.

Gardner: Len, you mentioned that you’re in a competitive industry and that margin is therefore under pressure. For you to improve your bottom line, how do you account for more productivity? How are you turning what we have described in terms of an IoT and data capability into that economic improvement to your business outcome?

Chopin: I refer to this as having a plant within a plant. There is always a lot more you can squeeze out of an operation by knowing what it’s up to, not day-by-day, but minute-by-minute. Our process runs quite quickly, so slippage in machine cycle times can occur rapidly. We must catch the small slippages, predict failures, and know when something is out of technical specification from the injection molding standpoint, or we could be producing a poor-quality product.

Getting a handle on what the machines are doing, minute by minute, gives us the advantage to better utilize the assets and the people, to optimize uptime, and to improve our quality, so we get more of the best product to the market. So it really does drive value right to the bottom line.

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Gardner: A big buzzword in the industry now is artificial intelligence (AI). We are seeing lots of companies dabble in it. But you need to put certain things in place before you can take advantage of capabilities that not only react but have the intelligence to prescribe new processes for doing things even more efficiently.

Are you working in conjunction with your integrator and HPE to allow you to exploit AI when it becomes mature enough for organizations like your own?

AI adds uptime 

Chopin: We are already embarking on using sensors for things that were seemingly unrelated. For example, we are picking up data points off of peripheral equipment that feeds into the molding process. This gives us a better handle on those inputs to the process, inputs to the actual equipment, rather than focusing only on throughput and how many parts we get in a given day.

For us, the AI is about that equipment uptime and preventing any of it from going down. By utilizing the inputs to the machines, it can notify us in advance, when we need to be notified.

Rather than monitoring equipment performance manually with a clipboard and a pen, we can check on run conditions or temperatures of some equipment up on the roof that’s critical to the operation. The AI will hopefully alert us to things that we don’t know about or don’t see because they could be at the far end of the operations. Yet there is a codependency on a lot of that upstream equipment that feeds the downstream equipment.

So for us, gaining transparency into that end-to-end process, and having enough intelligence built in to say, “Hey, you have a problem — not yet, but you’re going to have a problem,” allows us to react before the problem occurs and causes a production upset.
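As a rough sketch of that kind of “not yet, but you’re going to have a problem” alert, the example below fits a simple linear trend to a slowly rising temperature on a hypothetical rooftop chiller and estimates when it would cross its limit. The equipment, readings, and limit are assumptions for illustration only.

```python
from statistics import linear_regression  # requires Python 3.10+


def minutes_until_limit(samples, limit_c: float):
    """Fit a linear trend to (minute, temperature) samples and estimate how many
    minutes remain before the limit is crossed. Returns None if temperature is
    flat or falling."""
    minutes = [m for m, _ in samples]
    temps = [t for _, t in samples]
    slope, intercept = linear_regression(minutes, temps)
    if slope <= 0:
        return None
    current_estimate = intercept + slope * minutes[-1]
    return (limit_c - current_estimate) / slope


# Hypothetical chiller readings: creeping up about 0.1 C per minute toward a 40 C limit.
readings = [(m, 35.0 + 0.1 * m) for m in range(0, 30)]
remaining = minutes_until_limit(readings, limit_c=40.0)
if remaining is not None and remaining < 60:
    print(f"Chiller trending toward its limit in ~{remaining:.0f} minutes; schedule maintenance")
```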


Gardner: You can attain a total data picture across your entire product lifecycle, and your entire production facility. Having that allows you to really leverage AI.

Sounds like that means a lot of data over a long period of time. Is there anything about what’s happening at that data center core, around storage, that makes it more attractive to do this sooner rather than later?

Chopin: As I mentioned previously, there are a lot of data points coming off the machines. The bulk of it is useless, other than from an historical standpoint. So by utilizing that data — not pushing forward what we don’t need, but just taking the relevant points — we piggyback on the programmable logic controllers to just gather the data that we need. Then we further streamline that data to give us what we’re looking for within that process.

It’s like pushing out only the needle from the haystack, as opposed to pushing the whole haystack forward. That’s the analogy we use.
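A minimal sketch of that needle-versus-haystack filtering at the edge might look like the following. The tag names and deadbands are hypothetical, and the forwarding step is a print stand-in for publishing to the core or a historian.

```python
# Forward a PLC reading only when it moves beyond a per-tag deadband,
# so the "needle" goes upstream instead of the whole haystack.
DEADBANDS = {"barrel_temp_c": 0.5, "injection_pressure_bar": 2.0, "cycle_time_s": 0.1}

last_sent: dict[str, float] = {}


def maybe_forward(tag: str, value: float) -> bool:
    """Return True (and 'send' the point) only if it changed meaningfully."""
    previous = last_sent.get(tag)
    if previous is not None and abs(value - previous) < DEADBANDS.get(tag, 0.0):
        return False  # within the deadband: drop it at the edge
    last_sent[tag] = value
    print(f"forward {tag}={value}")  # stand-in for publishing to the core system
    return True


# Hypothetical polling loop output: unchanged readings are dropped at the edge.
for tag, value in [("barrel_temp_c", 221.0), ("barrel_temp_c", 221.2),
                   ("barrel_temp_c", 222.1), ("cycle_time_s", 9.02),
                   ("cycle_time_s", 9.03)]:
    maybe_forward(tag, value)
```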

Gardner: So being more intelligent about how you gather intelligence?

Chopin: Absolutely! Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

