How AI, IoT and blockchain will shake up procurement and supply chains

The next BriefingsDirect digital business thought leadership panel discussion focuses on how artificial intelligence (AI), the Internet of things (IoT), machine learning (ML), and blockchain will shake up procurement and supply chain optimization.

Stay with us now as we develop a new vision for how today’s cutting-edge technologies will usher in tomorrow’s most powerful business tools and processes. The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the data-driven, predictive analytics, and augmented intelligence approach to supply chain management and procurement, please welcome the executives from SAP Ariba: Dinesh Shahane, Sanjay Almeida, and Sudhir Bhojwani.

Here are some excerpts:

Gardner: It seems like only yesterday we were content to have a single view of a customer, clean data, or maybe a single end-to-end view of a business process's value. But now, we are poised to leapfrog the status quo by using words like predictive and proactive for many business functions.

Why are AI and ML such disrupters to how we’ve been doing business processes?

Shahane: If you look back, the technology that has shaped our private lives is now shaping our public lives as well. Think about the amount of data and signals that we are gathering; we call it big data.

We not only do transactions in our personal lives, we also have a lot of content pushed at us. Our phones record our location as we move, so we are wired and we are hyper-connected.

Similar things are happening to businesses. Since we are so connected, a lot of data is created. Having all that big data — and it could be a problem from the privacy perspective — gives you an opportunity to harness that data, to optimize it, and to make your processes much more efficient and much more engaging.

If you think about dealing with big data, you try and find patterns in that data, instead of looking at just the raw data. Finding those patterns collectively as a discipline is called machine learning. There are various techniques, and you can find a regression pattern, or you can find a recommendation pattern — you can find all kinds of patterns that will optimize things, and make your experience a lot more engaging.
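
To make the pattern idea concrete, here is a minimal sketch of the two pattern types Shahane mentions, a regression pattern and a recommendation pattern. The spend figures, item names, and the use of scikit-learn's LinearRegression are all invented for illustration; this is not SAP Ariba code.

```python
# A minimal, illustrative sketch of two pattern types found in big data:
# a regression pattern (projecting spend from order volume) and a
# recommendation pattern (items that co-occur in past orders).
# All data here is invented purely for illustration.
from collections import Counter
from itertools import combinations

from sklearn.linear_model import LinearRegression

# Regression pattern: learn how monthly order volume relates to spend.
order_volumes = [[120], [150], [90], [200]]        # units ordered per month
monthly_spend = [2400.0, 3000.0, 1800.0, 4000.0]   # dollars
model = LinearRegression().fit(order_volumes, monthly_spend)
print(model.predict([[175]]))  # projected spend for 175 units

# Recommendation pattern: items frequently bought together in past orders.
past_orders = [
    {"laptop", "dock", "monitor"},
    {"laptop", "dock"},
    {"monitor", "cable"},
]
pair_counts = Counter(
    pair for order in past_orders for pair in combinations(sorted(order), 2)
)
# For a buyer ordering a laptop, recommend the most frequent co-purchase.
laptop_pairs = {p: c for p, c in pair_counts.items() if "laptop" in p}
print(max(laptop_pairs, key=laptop_pairs.get))
```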

If you combine all these machine learning techniques with tools such as natural language processing (NLP), higher-level tools such as inference engines, and text-to-speech processing — you get things like Siri and Alexa. Those were created for the consumer space, but the same thing could be available for your business, and you can train it on your business processes. Overall, these improve efficiency, provide delight, and create a very engaging user experience.

Gardner: Sanjay, from the network perspective it seems like we are able to take advantage of really advanced cloud services, put that into a user experience that could be conversational, like we do with our personal consumer devices.

What is it about the cloud services in the network, however, that are game-changers when it comes to applying AI and ML to just good old business processes?

Multiple intelligence recommended

Almeida: Building on Dinesh’s comment, we have a lot of intelligent devices in our homes. When we watch Netflix, there are a lot of recommendations that happen. We control devices through voice. When we get home the lights are on. There is a lot of intelligence built into our personal lives. And when we go to work, especially in an enterprise, the experience is far different. How do we make sure that your experience at home carries forward to when you are at work?

From the enterprise and business networks perspective, we have a lot of data; a lot of business data about the purchases, the behaviors, the commodities. We can use that data to make the business processes a lot more efficient, using some of the models that Dinesh talked about.

How do we actually make recommendations, so that we move away from traditional search and acting on rows and columns, and drive that through a voice interface instead? How do we bring that intelligence together and recommend the next actions or the next business process? How do we use the data that we have to make it a recommendation-based interaction versus the traditional forms-based interaction?

Gardner: Sudhir, when we go out to the marketplace with these technologies, and people begin to use them for making better decisions, what will that bring to procurement and supply chain activities? Are we really talking about letting the machines make the decisions? Where does the best of what machines do and the best of what people do meet?

Bhojwani: Quite often I get this question: What will be the role of procurement in 2025? Are the machines going to be able to make all the decisions, leaving us no role to play? You can say the same thing about all aspects of life, so why only procurement?

I think human intelligence is still here to stay. I believe, personally, it can be augmented. Let's take a concrete example to see what that means. At SAP Ariba, we are working on a product called product sourcing. Essentially, this product takes a bill of material (BOM) and tells you the impact of changes to it. So what is so cool about that?

One of our customers has a BOM, which is an eight-level-deep tree with 10 million nodes in it. In this 10-million-node commodity tree, or BOM, a person is responsible for managing all the items. But how does he or she know the impact of a delay on the entire tree? How do you visualize that?

I think humans are very poor at visualizing a 10-million-node tree; machines are really good at it. Where the human is still going to be required is that eventually you have to make a decision. Are we comfortable with the machine alone making a decision? Only time will tell. I continue to think that this kind of augmented intelligence is what we are looking for, not some machine making complete decisions on our behalf.
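
For a feel of the lead-time question Bhojwani poses, here is a toy sketch. The four-item BOM below stands in for a tree that, in his customer's case, has 10 million nodes; the items, structure, and lead times are all invented.

```python
# Toy BOM impact analysis: what happens to the finished product's lead
# time when one component slips? A real BOM would have millions of nodes.

bom = {                      # assembly -> list of direct components
    "product": ["chassis", "board"],
    "board": ["cpu", "memory"],
}
lead_time = {"product": 2, "chassis": 5, "board": 3, "cpu": 10, "memory": 4}

def total_lead_time(item, delays=None):
    """Lead time of `item` = its own time plus the slowest component path."""
    delays = delays or {}
    own = lead_time[item] + delays.get(item, 0)
    children = bom.get(item, [])
    if not children:
        return own
    return own + max(total_lead_time(c, delays) for c in children)

baseline = total_lead_time("product")
delayed = total_lead_time("product", delays={"cpu": 7})  # cpu slips 7 days
print(f"baseline {baseline} days, with cpu delay {delayed} days")
# baseline 15 days, with cpu delay 22 days
```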

Gardner: Dinesh, what we get in our personal consumer space is in some cases nice to have, but it doesn't really change the game. How do we make this more than that? We are looking for higher productivity in business. The C-suite is looking for increased margins; they are looking for big efficiencies. What is it from a business point of view that these technologies can bring? Is this going to be just lipstick on a pig, so to speak, or do we really get to change how business productivity comes about?

Humans and machines working together

Shahane: I truly believe it will change productivity. The whole intelligence advantage — if you look at it from the highest perspective, such as an enhanced user experience — is the ability to help you make your decisions.

When you make decisions with this augmented assistant helping you along the way — while it deals with large amounts of data and turns them into business benefit — I think it will make a huge impact.

Let me give you an example. Think about supplier risk. Today, you first look at risk among the suppliers on the network with whom you are directly doing business. You want to know everything about them and their profiles, and you care about them being good business partners to you.

But think about the second, third, and fourth tiers; those suppliers are not so visible to your business. All the information about those further tiers is not directly available on the network; it is distant. But if those signals can be captured and somehow surfaced in your decision-making, it can really reduce risk.

Reducing risk means more productivity, more benefits to your businesses. So that is one advantage I could see, but there will be a number of advantages. I think we’ll run out of time if we start talking about all of those.

Gardner: Sanjay, help us better understand. When we take these technologies and apply them to procurement, what does that mean for the procurement people themselves?

Almeida: There are two inputs that you need to make strategic decisions, and one is the data. You look at that data and you try to make sense of it. As Sudhir mentioned, there is a limit to how much data processing human beings can do — and that's where some of these technologies will help quite a bit to make better decisions.

The other input is personal bias. Eliminating personal biases by using the data will improve the accuracy of your strategic decisions. A combination of those two will help make better decisions, faster decisions, and procurement groups can focus on the right stuff versus being busy with day-to-day tasks.

Using these technologies, the data, and the power of the data from computational excellence — that’s taking the personal biases out of making decisions. That combination will really help them make better strategic decisions.

Bhojwani: Let me add something to what Sanjay said. One of the biggest things we're seeing now in procurement, and in enterprise software in general, is that people's expectations have clearly gone up based on their personal experiences outside of work. I mean, 10 years back I could not have imagined that I would never go to a store to buy shoes. I thought, who buys shoes online? Now, I never go to stores; I don't know when I last bought shoes anywhere but online. It's been a few years, in fact. Now, think about that expectation on procurement software.

Procurement has traditionally been looked upon as a gatekeeper, there to ensure that nobody does anything wrong. The problem with that approach is that it is a "stick" model; there is no "carrot" behind it. What users want is, "Hey, show me the benefit and I will follow the rules." We can't punish the entire company because of a couple of bad apples.

By and large, most people want to follow the rules. They just don't know what the rules are; they don't have a platform that makes that decision-making easy, that enables them to get the job done sooner, faster, and better. And that happens when the user experience is acceptable and procurement is no longer seen as a gatekeeper. That is the fundamental shift that has to happen: procurement has to start thinking of itself as an enabler, not a gatekeeper.

Gardner: Here at SAP Ariba LIVE 2017, we’re hearing about new products and services. Are there any of the new products and services that we could point to that say, aha, this is a harbinger of things to come?

In blockchain we trust

Shahane: The conversational interfaces and bots are a fairly easy technology for anyone to adopt nowadays, especially because some of these algorithms are so readily available. But — from my perspective — the technologies that will have a huge impact on our lives will be the advent of IoT devices, 3D printing, and blockchain.

To me, blockchain is the most exciting one. It will have a huge impact on the way people look at the business network. Some people think about blockchain as a complementary idea to the network. Other people think that it is contradictory to the network. We believe it is complementary to the network.

Blockchain reaches out to the boundary of your network, to faraway places that we are not even connected to, and brings that into a governance model where all of your processes and all your transactions are captured in the central network.

I believe that a trusted transactional model combined with other innovations like IoT, where a machine could order by itself … My favorite example is when a washing machine starts working when the energy is cheaper … it’s a pretty exciting use-case.

That is open platforms and IoT combining with blockchain-based energy-rate brokering. These are the kinds of use cases that will become possible in the future. I see a platform sitting in the center of all these innovations.
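
As a hedged sketch of that washing-machine scenario, the snippet below has a device poll an energy rate and run only when the rate is cheap. The rate feed and threshold are hard-coded stand-ins; in the use case described above, the rate would come from a blockchain-based brokering service.

```python
# Illustrative sketch: a device defers its work until energy is cheap.
import time

CHEAP_RATE = 0.10  # dollars per kWh; an invented threshold

def current_rate():
    # Stand-in for querying an energy-rate broker; returns $/kWh.
    return 0.08

def run_when_cheap(poll_seconds=900):
    while True:
        if current_rate() <= CHEAP_RATE:
            print("Rate is cheap; starting wash cycle")
            return
        time.sleep(poll_seconds)  # check again later

run_when_cheap()
```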

Gardner: Sanjay, let’s look at blockchain from your perspective. How do you see that ability of a distributed network authority fitting into business processes? Maybe people hadn’t quite put those two together.

Almeida: The core concept of blockchain is distributed trust and transparency. When we look at business networks, we obviously have the largest network in the world. We have more than 2.5 million buyers and suppliers transacting on the SAP Ariba Network — but there are hundreds of millions of others who are not on the network. Obviously we would like to get them.

If you use blockchain technology to bring that trust together, it's a federated trust model. Then our supply chains would be a lot more efficient and a lot more trustworthy. It will improve efficiency, and all the risk associated with managing suppliers will be managed better by using that technology.
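
The tamper-evidence at the core of that trust model fits in a few lines. This is a minimal sketch, not SAP Ariba's implementation: each record embeds the hash of the previous one, so any change to history is detectable by every participant. Real blockchains add distribution and consensus on top of this.

```python
# Minimal hash-chained ledger: tampering with any past transaction
# breaks the chain and is detectable on verification.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis marker
for tx in [{"buyer": "A", "supplier": "B", "amount": 100},
           {"buyer": "A", "supplier": "C", "amount": 250}]:
    block = {"tx": tx, "prev": prev}
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))            # True
chain[0]["tx"]["amount"] = 999  # tamper with history
print(verify(chain))            # False: block 0's hash no longer matches
```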

Gardner: So this isn’t a “maybe,” or an “if.” It’s “definitely,” blockchain will be a significant technology for advancing productivity in business processes and business platforms?

Almeida: Absolutely. And you have to have the scale of an SAP Ariba: the scale from the number of suppliers and the amount of business that happens on the network. You have to have scale and technology together to make that happen. We want to be at the center of a blockchain, we want to be a blockchain provider, so that third-party ecosystem partners can be part of this trusted network and make the process a lot more efficient.

Gardner: Sudhir, for those who are listening and reading and are interested in taking advantage of ML and better data, of what the IoT will bring, and of AI where it makes sense — what, in your estimation, should they be doing now to prepare their organizations to best take advantage of the technologies and services that folks like SAP Ariba provide, so that they can stand out in their industry?

Bhojwani: That’s a very good question, and that’s one of our central themes. At the core of it, I fundamentally believe the tool cannot solve the problem completely on its own, you have to change as well. If the companies continue to want to stick to the old processes — but try to apply the new technology — it doesn’t solve the problem. We have seen that movie played before. People get our tool, they say, hey, we were sold very good visions, so we bought the SAP Ariba tool. We tried to implement it and it didn’t work for us.

When you question that, generally the answer is, "We just tried to change the tool to fit our model, to fit our processes. We didn't try to change the processes." As for blockchain, enterprises are not used to opening up for track and trace; they are not exposing that kind of information in any shape or form — or they are very secretive about it.

So for them to suddenly participate in this requires a change on their side. It requires seeing what is the benefit for me, what is the value that it offers me? Slowly but surely that value is starting to become very, very clear. You hear more companies — especially on the payment side — starting to participate in blockchain. A general ledger will be available on blockchain some day. This is one of the big ideas for SAP.

If you think about SAP, they run more general ledgers in the world than any other company. They are probably the biggest general ledger company that connects all of that. Those things are possible, but it's still only a technology until companies are willing to say, "Hey, this is the value … but I have to change myself as well."

This changing-yourself part, even though it sounds so simple, is what we are seeing in the consumer world, where change happens a little bit faster than in the enterprise world. But even the enterprise world is changing, because of the demands of end users, the Millennials, as they come into the workforce, and the force and expectations that they bring. Enterprises that continue to resist won't be sustainable.

They will be forced to change. So I personally believe that in the next three to five years, when there are more and more Millennials in the workforce, you will see people adopting blockchain and new ledgers at a much faster pace.

A change on both sides

Shahane: I think Sudhir put it very nicely. I think enterprises need to be open to change. You can achieve transformation if the value is clearly articulated. One of the big changes for procurement is you need to transition yourself from being a spend controller into a value creator. There is a lot of technology that will benefit you, and some of the technology vendors like us, we cannot just throw a major change at our users. We have to do it gradually. For example, with AI it will start as augmented first, before it starts making algorithmic decisions.

So it is a change on both sides, and once that happens — and once we trust each other on the system — nice things will happen.

Almeida: One thing I would add to that is organizations need to think about what they want to achieve in the future and adopt the tool and technology and business processes for their future business goals. It’s not about living in the past because the past is going to be gone. So how do you differentiate yourself, your business with the rest of the competition that you have?

The past business processes, people, and technology may not necessarily get you there. So how do you leverage the technology that companies like SAP and Ariba provide? Think about what your future business processes should be. The people that you will have, the Millennials as Sudhir mentioned, have different expectations, and they won't accept the status quo.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Why effective IoT adoption is a team sport, and how to become a player

The next BriefingsDirect Voice of the Customer discussion highlights how Internet of things (IoT) adoption means more than just scaling-up networks. The complexity and novel architectural demands of IoT require a rethinking of the edge of nearly any enterprise.

We’ll explore here how implementing IoT strategies is not a one-size-fits-all endeavor — nor can it be bought off the shelf. What’s more, those new to the computational hive and analytical edge attributes of IoT are discovering that it takes a team approach.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To explain how many disparate threads of an effective IoT fabric come together, we’re joined by Tushar Halgali, Senior Manager in the Technology Strategy and Architecture Practice at Deloitte Consulting in San Francisco, and Jeff Carlat, Senior Director of Technology Solutions at Hewlett Packard Enterprise (HPE) Strategic Alliances. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends making organizations recognize the importance of IoT?

Carlat: We’re at the cusp of a very large movement of digitizing entire value chains. Organizations have more and more things connected to the Internet. Look at your Nest thermostats and the sensors that are on everything. The connectivity of that back to the data center to do analytics in real-time is critical for businesses to reach the next level of efficiencies — and to maintain their competitiveness in the market.

Gardner: Tushar, this is a different type of network requirement set. We’re dealing with varied data types, speeds, and volumes in places that we haven’t seen before. What are the obstacles that organizations face as they look at their current infrastructure and the need to adapt?

Halgali: One of the really interesting things we've seen is that traditionally organizations have treated all technology-related problems as information technology (IT) problems. There was this whole concept of machine-to-machine (M2M) communication a while back. It connected machines to the Internet, but it was a very small segment.

Now, we’re trying to get machines to connect to the Internet and have them communicate with each other. There are a lot of complexities involved. It’s not just the IT pieces, but having the operational technology (OT) connect to the IT world, too. It creates a very complex ecosystem of components.

Gardner: Let’s parse out the differences between OT in the IT. How do you describe those? Why should we understand and appreciate how different they are?

Carlat: When we think of OT, we think of long-standing companies out there, Bosch, National Instruments (NI), and many other companies that have been instrumenting sensors for operations, shop floors, and oil and gas, with every pump being sensed. The problem is that humans have had to interact a lot around those sensors, to remediate or to understand when something like a bearing on a pump has failed. [Learn more on OT and IoT.]

What’s key here is that IT, those core data-center technologies that HPE is leading the market in, has the ability of run analytics and to provide intelligence and insights from all of that sensor data. When you can connect the OT devices with the IT — whether in the data center or delivering that IT to the edge, which we call the Intelligent Edge — you can actually do your insights, create your feedback, and provide corrective actions even before things fail, rather than waiting.

Gardner: That failed ball bearing on some device isn’t just alerting the shop floor of a failure, it’s additionally automating a process where the inventory is checked. If it’s not there, the supply chain is checked, the order is put in place, it’s delivered and ready to install before any kind of breakdown — or did I oversimplify that?

End of Downtime

Carlat: That’s a fair representation. We’re working closely with a company called Flowserve. We’re building the telemetry within the pumps so that when a cavitation happens or a bearing is starting to wear out, it will predict the mean time for failure and alert them immediately. It’s all truly connected. It will tell you when it’s going to fail. It provides the access to fix it ahead of time, or as part of a scheduled maintenance plan, rather than during downtime, because downtime in an oil production facility or any business can cost millions of dollars.

Gardner: Tushar, are there any other examples you can think of to illustrate the power and importance of OT and IT together?

Halgali: If our readers ever get a chance to check out one of the keynote presentations [at HPE Discover London 2016] on the Intelligent Edge, there's a good demonstration of PTC ThingWorx software, an IoT platform, together with HPE Edgeline servers in a manufacturing facility. You have conveyor belts that need regular upkeep; they're constantly producing things, and they're part of the production facility. It's all tied to the revenue of the organization, and the minute one shuts down, there are problems.

Maintenance needs to be done on those machines, but you don't want to do it too soon, because then you're spending money unnecessarily and it's not efficient. You don't want to do it too late, because then there's downtime. So, you want to find the equilibrium between the two.

IoT determines the right window for when that maintenance needs to be done. If there’s a valve problem, and something goes down quickly, sensors track the data and we analyze the information. The minute that data goes off a certain baseline, it will tell you about this problem — and then it will say that there’s the potential in the future for a major problem.

It will actually generate a work order, which then feeds from the OT systems into the IT systems, and it’s all automatic. Then, when mechanics come in to try to solve these problems, they can use augmented reality or virtual reality to look at the machine and then fix the problem.
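
Here is a rough sketch of that closed loop, with invented readings, thresholds, and work-order fields: a sensor reading is compared against a learned baseline, and a work order is emitted when it drifts.

```python
# Illustrative predictive-maintenance loop: flag drift off a learned
# baseline and generate a work order for the IT system to pick up.
from statistics import mean, stdev

baseline_readings = [4.9, 5.1, 5.0, 5.2, 4.8]  # normal valve vibration, mm/s
mu, sigma = mean(baseline_readings), stdev(baseline_readings)

def check_sensor(reading, threshold_sigmas=3.0):
    """Return a work-order dict when the reading drifts off baseline."""
    if abs(reading - mu) > threshold_sigmas * sigma:
        return {
            "type": "preventive_maintenance",
            "reason": f"vibration {reading} mm/s vs baseline {mu:.1f}",
            "priority": "high",
        }
    return None

print(check_sensor(5.1))  # None: within baseline
print(check_sensor(7.4))  # work order generated, fed into the IT system
```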

It’s actually a closed-loop ecosystem that would not have happened in the M2M base. It’s the next layer of maturity or advancement that IoT brings up.

Gardner: We can measure, we can analyze, and we can communicate. That gives us a lot of power. We can move toward minimum viable operations, where we’re not putting parts in place when they’re not needed, but we’re not going down either.

It reminds me of what happened on the financial side of businesses a decade or two ago, when you wanted spend management. You couldn't do it until you knew where all your money was and where all the bills had to be paid, but once you did, you could really manage things precisely. Those were back-office apps, digital ledgers.

So, it’s a maturity coming to devices — analog, digital, what have you, and it’s pretty exciting. What’s the impact here financially, Jeff?

Carlat: Well, huge. Right now, IDC predicts IoT will represent about a $1.3 trillion opportunity by 2020. It's a huge opportunity, not only for incremental revenue for businesses, but for increased efficiencies, reducing cost, reducing downtime, and reducing risk; so, a tremendous benefit. Companies need to strongly consider a move to digitize their value chains to remain competitive in the future.

Bigger and Better Data at the Edge

Gardner: Okay. We understand why it’s important and we have a pretty good idea of what you need to do. Now, how do you get there? Is this big data at the edge? I heard a comment just the other day that there’s no bigger data than edge data and IoT data. We’re going to have to manage scales here that we haven’t seen before.

Carlat: It’s an excellent point. Jet engines that are being used today are generating 5 TB of data every time they land or take off. Imagine that for every plane, every engine that’s flying in the sky, every day, every year. The amount of data is huge. This brings me to the unique way that HPE is approaching this, and we truly believe we are leaders in the data center now and are leaders within IT.

We’re taking that design, that implementation, that knowledge, and we’re designing infrastructure, data center quality infrastructure, that’s put on the edge, ruggedized compute or analytics, and providing the ability to do that analysis, the machine learning, and doing it all locally, rather than sending all that data to the cloud for analytics. Imagine how expensive that would be.

That’s one approach we’re taking on within HPE. But, it’s not just about HPE tackling this. Customers are asking where to start. “This is overwhelming, this is complex. How do we do this?” We’re coming together to do advisory services, talking our customers through this, hand-holding, building a journey for them to do that digitization according to their plans and without disrupting their current environment.

Gardner: Tushar, when you have a small data center at the edge, you’re going to eke out some obvious efficiencies, but this portends unforeseen circumstances that could be very positive. What can you do when you have this level of analytics, and you take it to a macro level? You can begin to analyze things on an industry-level, and then have the opportunity to drill down and find new patterns, new associations, perhaps even new ways to design processes, factory floors, retail environments? What are we talking about in terms of the potential for the analytics when we capture and manage this larger amount of data?

Halgali: We’ve noted there are a lot of IoT use cases, and the value that generates so far has been around cost optimization, efficiencies, risk management, and those kinds of things. But by doing things on the edge, not only can you do all of those, you can start getting into the higher-value areas, such as revenue growth and innovation.

A classic example is remote monitoring. Think of a healthcare provider that would not otherwise be able to get into the business of managing people's health because the patients are all located remotely. If you have devices and sensors in their homes, you can start tracking their behaviors and patterns, when they're taking medicine and those kinds of things, and build profiles of those people from all that information. You have now distributed the power of taking care of all the constituents in your base, without having them physically be in a hospital.

Gardner: Those feedback loops are not just one way where you can analyze, but you can start to apply the results, the benefits of the analysis, right back into the edge.

Carlat: Health and life sciences are great examples of using IoT as a way of more efficiently managing hospital beds. It costs a lot of money to have people sit in a hospital when they don't need to be there. To be able to provide patient access remotely, to monitor patients, and to intervene on an as-needed basis drives much greater efficiencies.

We’ve talked a little bit about industrial IoT, we’ve talked a little bit about health and life sciences, but this extends into retail and smart stores, too. We’re doing a lot with Home Depot to deliver the store of the future, bridging the digital with the brick-and-mortar across 2,200 stores in North America.

It also has to do with the experience around campus and branch networks. At Levi's Stadium in Santa Clara, California, HPE built out an indoor Global Positioning System (GPS) and a mobile app that allows indoor wayfinding. It allows the patrons visiting a game to have a totally new, immersive experience.

By mapping photo uploads and downloads across the stadium, they found hotspots. The hotspots had a great, unobstructed view of the field, so there were a lot of people there taking pictures. They installed a food stand nearby and increased revenues because of strategic placement based on this IoT data. Levi's Stadium recognized $1 million in additional revenue in the first season and 10 times growth in the number of contacts in their repository.

Gardner: So, it’s safe to say that edge computing and intelligence is a gift that will keep giving, at levels organizations haven’t even conceived of yet.

Carlat: I believe it’s a necessity to stay competitive in the world of tomorrow.

Gardner: If your competitor does this, and you don’t, that’s going to be a big question mark for your customers to mull over.

While we are still on the subject of the edge technical capabilities, by being able to analyze and not just pass along data, it seems to me it’s also a big help when it comes to compliance and security, which are big concerns.

Security risk is not only mitigated by hardening or putting up a wall; probably the safest bet is to be able to analyze when something is breached or going wrong, and then to control or contain it. Tell me why the HPE Edgeline approach of analyzing data fast, on the edge, can also be a big boost to security risk containment and other compliance issues.

Carlat: We do a lot around asset tracking. Typically, you need to send someone out there to remediate. By using Edgeline, our sensor data, and asset tagging, you can ensure that the right service person is identified as physically at the pump to replace it, rather than just saying that they did it, writing it on paper, and actually being off doing something else. You have security, you have the appropriate compliance levels with the right people fixing the right things in the right manner, and it's all traceable and trackable.

Halgali: When you begin using edge devices, as well as geolocation services, you have this ability to do fine-grained policy management and access control for not just the people, but also devices. The surface area for IoT is so huge there are many ad-hoc points into the network. By having a security layer, you can control that and edge devices certainly help with that.

A classic example would be a camera in a certain place. The camera is constantly recording feeds of things that are going on, right or wrong. But the algorithms that have been fed into the edge device allow it to recognize what is normal, so it can not only alert the authorities at the right time, but also store only the footage that deviates. Why store days and days' worth of images when you can keep only the ones that truly matter?
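
A toy version of that filtering logic is sketched below, with precomputed anomaly scores standing in for real video analytics; the threshold and frame format are invented for illustration.

```python
# Illustrative edge filter: keep and alert on frames that deviate from
# a learned notion of "normal"; discard the rest instead of storing them.
NORMAL_MAX_SCORE = 0.3  # invented threshold learned from normal footage

def alert(frame):
    print(f"alerting authorities: frame {frame['id']} score {frame['score']}")

def filter_frames(frames):
    """Keep only frames whose anomaly score exceeds the normal range."""
    kept = [f for f in frames if f["score"] > NORMAL_MAX_SCORE]
    for frame in kept:
        alert(frame)
    return kept  # only these are stored; normal footage is discarded

frames = [{"id": 1, "score": 0.1}, {"id": 2, "score": 0.9}, {"id": 3, "score": 0.2}]
stored = filter_frames(frames)
```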

As Jeff said, it allows workplace restrictions and compliance, but in an open area it also allows you to track specific events.

In other cases, let’s say the mining industry or the oil and gas industry, where you have workers that are going to be in very remote locations and it’s very hard to track each one of them. When you have the ability to track the assets over time, if things go wrong, then it’s easier to intervene and help out.

Carlat: Here is a great personal example. I went to my auto dealership and I pulled into the garage. Immediately, I was greeted at my door by name, “Hello Mr. Carlat. Are you in for your service?”

I thought, “How do you know I came in? Are you tracking me? How are you doing that?” It turns out, they have radio-frequency identification (RFID) tags. When you come in for service, they apply these tags. As soon as you pull in, they provide a personalized experience.

It also yields the net benefit of location tracking. They know exactly where my car is at all stages. If I move to a rental car that they have there, my profile is automatically transferred over. It also starts their cycle-time metrics, the traceability of how they're doing on remediating whatever my service need may be. It's a whole new experience. I'm now a lifetime-loyal customer of this auto dealer because of that personalization; it all comes from the implementation of IoT.

Gardner: The implications are vast; whether it’s user experience, operational efficiency, risk reduction, or insights and analysis at different levels of an industry and even a company.

It’s very impressive stuff, when you can measure everything and you can gather as much data as you want and then you can triage, and analyze that data and put the metadata out to the cloud; so much is possible.

We’ve established why this is of interest. Now, let’s think a little bit about how you get there for organizations that are thinking more about re-architecting their edge in order to avail themselves of some of these benefits. What is it about the HPE and Deloitte alliance that allows for a pathway to get on board and start doing things in a proper order to make this happen in the best fashion?

Transformation Journey, One Step at a Time

Halgali: Dana, anytime you do an IoT initiative, the key thing to realize is that it should be part of a major digital transformation initiative. Like any other transformation story, there are people, process, and technology components to it. Jeff and I can talk about these three at a very high level when you begin talking about the process and the business model.

Deloitte has a huge practice in the strategy and the process space. What we’re looking at is digital industrial value-chain transformation. Let’s look at something like a smart factory.

What’s the value chain for an organization that’s making heavy machinery, end-to-end, all the way from R and D and planning, to procurement and development and shipment, and after-sale repairs, the entire value chain? What does that look like in the new IoT era? Then, decompose that into processes and use cases, and then identify which are the most high-value use cases, quantifying them, because that’s important.

Identifying the use cases that will deliver immediate tangible value in the near term provides the map of where to begin the IoT journey. If you can’t quantify concrete ROI, then what’s the point of investing? That addresses the reason of what IoT can do for the organization and why to leverage this capability. And then, it’s about helping our clients build out the business cases, so that they can justify the investments needed from the shareholders and the board — and can start implementing.

At a very high level, what’s the transformation story? What’s the impact on the business model for the organization? Once you have those strategy questions answered, then you get into the tactical aspects, which is how we execute on it.

From an execution standpoint, let's look at enablement via technology. Once you have identified which use cases to implement, you can utilize the pre-integrated, pre-configured IoT offerings that Deloitte and HPE have co-developed. These offerings address use cases such as asset monitoring and maintenance (in oil and gas, manufacturing, and smart cities), intelligent spaces (in public venues such as malls, retail stores, and stadiums), and digital workplaces (in office buildings). One must also factor in organizational change and communication management, because addressing cultural shifts is one of the most challenging aspects of an IoT-enabled digital transformation. Such a holistic approach helps our clients to think big, start small, and scale fast.

Gardner: Tushar just outlined a very nice on-ramp process. What about some places to go for information or calls for action? Where should people get started as they learn how to implement on the process that Tushar just described?

Carlat: We’re working as one with Deloitte to deliver these transformations. Customers with interest can come to either Deloitte or HPE. We at HPE have a strong group of technology services consultants who can step in and help in partnership with Deloitte as well.

So, come to either company. Any of our partner representatives can get all of this started, and our websites are loaded with information. We're here to help. We're here to hold our customers' hands and lead them to digitize and achieve the promised efficiencies that can be garnered from digital value chains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

TasmaNet ups its cloud game to deliver a regional digital services provider solution

The next BriefingsDirect Voice of the Customer cloud adoption patterns discussion explores how integration of the latest cloud tools and methods help smooth out the difficult task of creating and maintaining cloud-infrastructure services contracts.

The results are more flexible digital services that both save cloud consumers money and provide the proper service levels and performance characteristics for each unique enterprise and small business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Stay with us now as we hear from a Tasmanian digital services provider, TasmaNet, about its solution-level approach to cloud services attainment, especially for mid-market enterprises. To share how proper cloud procurement leads to new digital business innovations, we're joined by Joel Harris, Managing Director of TasmaNet in Hobart, Tasmania. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start at a high level, looking at the trends that are driving how cloud services are affecting how procurement is going to be done in 2017. What has changed, in your opinion, in how enterprises are reacting to and leveraging the cloud services nowadays? 

Harris: We’re seeing a real shift in markets, particularly with the small- and medium-sized businesses (SMBs) in their approach and adoption of cloud services. More and more, there is an acceptance that it’s okay to buy products off the Internet. We see it  every day within personal cloud, iPhones, the Apple Store, and Google Play to buy movies. So, there is now the idea in the workplace that it’s acceptable to procure business services online through cloud providers. 

Because of the success of personal cloud with companies such as Apple, there’s a carry-over in that there is an assumed equivalent success in the commercial sense, and unfortunately, that can cause some problems. What we’re seeing is a willingness to start procuring from public, and also some private cloud as well, which is really good. What we’re finding, though, is a lack of awareness about what it means for businesses to buy from a cloud provider.

Gardner: What is it that the people might have wrong? What is it that they’ve not seen in terms of where the real basis for value comes when you create a proper cloud relationship? 

Harris: Look at the way personal cloud is procured: a simple click, a simple install, and you have the application. If you don't like it, you can delete it.

When you come into a commercial environment, it's not that simple, although there can be a perception that it is. When you're looking at an application, the glossy picture, it may talk about functionality, business improvement, future savings, and things like that. But when you come to the implementation of a cloud product or a cloud service in a business, the business needs to make sure that it has met its service levels, from internal and external business requirements, and from customers and markets.

But you also need to make sure that it has married up with the skills of your workforce. Cloud services are really just a tool for a business to achieve an outcome. So, you're either arming someone in the workforce with the tool and skills to achieve an outcome, or you're going to use a service from a third party to achieve that outcome.

Because we’re still very early in the days of cloud being adopted by SMBs, the amount of work being put into the marrying up of the capabilities of a product, or the imagined capabilities of a product, for future benefits to internal business processes and systems is clearly not as mature as we would like. Certainly, if you look into the marketplace, the availability of partners and skills to help companies with this is also lacking at the moment. 

Cloud Costs

Then comes the last part that we talked about, which is removing or changing the application. At the moment, a lot of SMBs are still using traditional procurement. Maybe they want a white car. Well, in cloud services there's always the ability to change the color, but it does come at a cost. There's traditionally a variation fee or similar charge.

SMBs are getting themselves in a bit of trouble when they say they would like a white car with four seats, and then, later on, find that they actually needed five seats and a utility. How do they go about changing that? 

The cost of change is something that sometimes gets forgotten in those scenarios. Our experience over the last two years has been that companies overlook the cost of change when under a cloud-services contract.

Gardner: I’ve also heard you say, Joel, that cloud isn’t for everyone, what do you mean by that? How would a company know whether cloud is the right fit for it or not?

Harris: Simply look for real, deep understanding of your business. Coming back to the ability to link up service levels, it’s the ability to have a clear view into the future of what a company needs to achieve its outcomes. If you can’t answer those questions for your customer, or the customer can’t answer the questions for you as a cloud provider, then I would advise you to take a step back and really start a new process of understanding what it is the customer wants out of the cloud product. 

Change later on can cost, and small businesses don't have unlimited money to continue funding a third party to change the implementation of what, in most cases, becomes a core piece of software in the organization.

Gardner: For the organizations that you work with that are exploring deeper relationships to private cloud, do you find that they’re thinking of the future direction as well or thinking of the strategy that they’d like to go hybrid and ultimately perhaps more public cloud? Is that the common view for those that are ready for cloud now?

Harris: In the enterprise, yes. We're definitely seeing a huge push by organizations that understand how applications break down between those suitable for private cloud and those suitable for public cloud.

As you come down into the SMB market, that line blurs a little bit. We have some companies that wish to put everything in the cloud because it’s easy and that’s the advice they were given. Or, you have people who think they have everything in the cloud, but it’s really a systems integrator that has now taken their servers, put them in a data center, and is managing them as more of a hosted, managed solution. 

Unfortunately, what we are seeing is that a lot of companies don’t know the difference between moving into the cloud and having a systems integrator manage their hardware for them in a data center where they don’t see it.

There’s definitely a large appetite for moving to the as-a-service model in companies that have a C-suite or some level of senior management with ownership of business process. So, if there is a Chief Information Officer (CIO) or a Chief Technology Officer (CTO) or some sort of very senior Information Technology (IT) person that has a business focus on the use of technology, we’re seeing a very strong review of what the company does and why and how things should be moved to either hybrid or 100 percent in either direction.

Gardner: So, clearly the choices you make around cloud affect the choices you make as a business; there really is a transformational aspect to this. Therefore, the contract, that decision document of how you proceed with your cloud relationship, is not just an IT document; it’s really a business document. Tell us why getting the contract right is so important.

Harris: It’s very, very important to involve all the areas of a business when going into a cloud services contract.

Ecosystems of Scale

Gardner: And it’s no longer really one relationship. That is to say that a contract isn’t often just between one party and another. As we’re finding out, this is an ecosystem, a team sport, if you will. How does the contract incorporate the need for an ecosystem and how does TasmaNet help solve that problem of relationship among multiple parties?

Harris: Traditionally, if we look at the procurement department of a company, the procurement department would draft a tender, negotiate a contract between the supplier and the company, and then services would begin to flow, or whatever product was purchased would be delivered. 

More and more, though, in the cloud services contract, the procurement department has little knowledge of the value of the information or the transaction that’s happening between the company and the supplier, and that can be quite dangerous. Even though cloud can be seen as a commodity item, the value of the services that come over the top is very much not a commodity item. It’s actually a high-value item that, in most cases, is something relevant to keeping the company operating.

What we found at TasmaNet was that a lot of the companies moving to cloud don’t have the tools to manage the contract. They’re familiar with traditional procurement arrangements, but in managing a services contract or a cloud services contract, if we want to focus on what TasmaNet provides, you need to know a number of different aspects. 

We created an ecosystem and we said that we were going to create this ecosystem with all of the tools required for our customers. We put in a portal, so that the finance manager can look at the financial performance of the services. Does it meet budget expectations, is it behaving correctly, are we achieving the business outcomes for the dollars that we said it was going to cost?

Then, on the other side, we have a different portal that’s more for the technology administrator about ensuring that the system is performing within the service-level agreements (SLAs) that have been documented either between the company and the service provider or the IT department and the big internal business units. 

It’s important to understand there are probably going to be multiple service levels here, not only between the service provider and the customer, but also the customer and their internal customers. So, it’s important to make sure that they’re managed all the way through. 

We provide a platform so that people can monitor end to end, from the customers using the service all the way through to the financial manager on the other side.
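
As an illustration of the kind of check that could sit behind such portals, the sketch below compares measured metrics against a contracted SLA and budget. The SLA values, metrics, and field names are invented.

```python
# Illustrative SLA and budget review: one view for the technology
# administrator (uptime) and one for the finance manager (spend).
contracted_sla = {"uptime_pct": 99.9, "monthly_budget": 12000.0}
measured = {"uptime_pct": 99.95, "monthly_spend": 13100.0}

def review(sla, actual):
    findings = []
    if actual["uptime_pct"] < sla["uptime_pct"]:
        findings.append("SLA breach: uptime below contracted level")
    if actual["monthly_spend"] > sla["monthly_budget"]:
        findings.append("Budget exceeded: review service consumption")
    return findings or ["All service levels and budgets on track"]

for finding in review(contracted_sla, measured):
    print(finding)
```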

Gardner: We’ve seen the importance of the contract. We understand that this is a complex transaction that can involve multiple players. But I think there is also another shift when we move from a traditional IT environment to a cloud environment and then ultimately to a hybrid cloud environment, and that’s around skills. What are you seeing that might be some dissonance between what was the skill set before and what we can expect the new skill set for cloud computing success to be?

Sea Change

Harris: We are seeing a huge change, and sometimes this change is very difficult for the people involved. With cloud services coming along, the nature of the tool is changing. A lot of people traditionally have been trained in a single skill set, such as storage or virtualization. Once you start to bring in cloud services, you're bundling a bunch of individual tools and infrastructure together to become one, and all of a sudden, that worker now has a tool that is made up of an ecosystem of tools. Therefore, their understanding of those different tools, how they report on them, and the related elements all change.

We see a change from people doing to controlling. We might see a lot of planning to try to avoid events, rather than responding to them. It really does change the ecosystem in your workforce, and it’s probably one of the biggest areas where we see risk arise when people are moving to a cloud-services contract.

Gardner: Is there something also in the realm of digital services, that larger category of business-focused outcomes, rather than just technology? Is that another thing we need to take into consideration as organizations think about the right way to transform to be of, for, and by the cloud?

Harris: It comes back to a business understanding. It’s being able to put a circle around something that’s a process or something we could buy from someone else. We know how important it is to the company, we know what it costs the company, and we know the service levels needed around that particular function. Therefore, we can put it out to the market to evaluate. Should we be looking to buy this as a digital service, should we be looking to outsource the process, or should we be looking to have it internally on our own infrastructure and continue running it?

Those questions, and the fact-finding that goes into them, are among the most important things I encourage a customer looking at cloud services to spend a lot of time on. It's also one of the key reasons why we have such a strong partnership with Hewlett Packard Enterprise (HPE). The hardware and infrastructure are strong and good, but the skill sets and programs we can access to work with our customers, to pull out information and map what the landscape looks like inside a customer, are just as important as the infrastructure itself.

Gardner: So, the customer needs to know themselves and see how they fit into these new patterns of business, but as you are a technologist, you also have to have a great deal of visibility into what’s going on within your systems, whether they’re on your premises, or within a public-private cloud continuum of some kind. Tell me about the TasmaNet approach and how you’re using HPE products and solutions to gain that visibility to know yourself even as you are transforming.

Harris: Sure, there are a couple of functions that we use with HPE. They have a very good [cloud workload suitability] capability set called HPE Aura, with which they can sit down with us and work through the total cost of ownership for an organization. That's not just at an IT level; it's for almost anything, working with the accounting team to look at the total cost, from electricity through to resources, third-party contractors, and construction teams. That gives us a very good baseline understanding of how much it costs today, which is really important for people to understand.

Then, we also have other capabilities. We work with HPE to model data about the what-if. It’s very important to have that capability when working with a third-party on understanding whether or not you should move to cloud. 

Gardner: Your comments, Joel, bring me back to a better understanding of why a static cloud services contract really might be a shackle on your ability to innovate. How do you recognize that you need to know what you don't know going into the cloud, and therefore put in place the ability to react and iterate on a short-term basis? What kind of contract allows for that dynamic ability to change? How do you begin to think about a contract that is not static?

Harris: We don’t know the answer yet. We’re doing a lot of work with our current customers and with HPE to look at that. Some of the early options we are looking at is that when we create a master services agreement with a company, even for something that may be considered a commodity, we ensure that we put in a great plan around innovation, risk management framework side, and continuous service improvement. Then there’s a conduit for information to flow between the two parties around business information, which can then feed into the use of the services that we provided.

I think we still have a long way to go, because there’s a certain maturity required. We’re essentially becoming a part of another company, and that’s difficult for people to swallow, even though they accept using a cloud services contract. We’re essentially saying, “Can we have a key to your data center, or the physical front door of your office?”

If that’s disconcerting for someone, well, it should be equally disconcerting that they’re moving to cloud, because we need access to those physical environments, the people face-to-face, the business plan, the innovation plan, and to how they manage risk in order to ensure that there is a successful adoption of cloud not just today, but also going forward.

Gardner: Clearly, the destiny of you and your clients is tied closely together. You need to make them successful, they need to let you show them the tools and the new flexible nature and you need to then rely on HPE to give you the means to create those dashboards and have that visibility. It really is a different kind of relationship, co-dependence, you might say.

Harris: The strength that TasmaNet will have going forward is the fact that we’re operating under a decentralized model. We work with HPE, so that we can have a workforce on the ground closer to the customer. The model of having all of your cloud services in one location, a thousand kilometers away from the customer, while technically capable, we don’t believe is the right mix in client-supplier relationships. We need to make sure that physically there are people on the ground to work hand-in-hand with the business management and others to ensure that we have a successful outcome. 

That’s one of the strong key parts to the relationship between HPE and TasmaNet. TasmaNet is now a certified services provider with HPE, which lets us use their workforce anywhere around Australia and work with companies that want to utilize TasmaNet services.

Gardner: Help our readers and listeners understand your regional reach. It's primarily in Tasmania, but you're also in mainland Australia, and you have designs and plans for an even larger expansion. Tell us about your roadmap.

No Net is an Island – Tasmania and Beyond

Harris: Over the last few years, we've really been spending time gathering information from a couple of early contracts to understand the relationship between a cloud provider and a customer. In the last six months, we put that into a product we call TasmaNet Core, which is our new system for delivering digital services.

During the next 18 months we are working with some large contracts that we have won down here in Tasmania, having just signed one for the state government. We certainly have a number of opportunities and pathways to start deploying services and working with the state government on how cloud can deliver better business outcomes for them. We need to make sure we really understand and document clearly how we achieve success here in Tasmania.

Then our plan, as a company, is to push this out to the national level. There are a lot of regional places throughout Australia that require cloud services, and more and more companies like TasmaNet will move into those regional areas. We think it's important that they aren't forgotten, and we also think that any business that can be developed in Tasmania and operated successfully can be replicated in regional areas around Asia-Pacific as required.

Gardner: Joel, let’s step back a moment and look at how to show, rather than tell, what we mean, in the new era of cloud, by a proper cloud adoption. Do you have any examples, either named or generic, where we can look at how this unfolded and what the business  benefits have been when it’s done well?

Harris: One of our customers, about three years ago, moved into a cloud services environment, which was very successful for the company. But we found that some of the contracts with their software services, while they enabled the move to a cloud provider, added a level of complexity that made the platform very difficult to manage on an ongoing basis.

Over a number of years, we worked with them to remove that key application from the cloud environment. It’s really important that, as a cloud provider, we understand what’s right for the customer. At the end of the day, if there’s something that’s not working for the customer, we must work with them to get results.

It worked out successfully, and we have a very strong relationship with the company, a local operator down here called TT-Line, which runs vessels shipping between Tasmania and mainland Australia. Because of that platform, we had to find the right mix. That's really important, and I know HPE uses it as a catchphrase.

This is a real-world example of why it's important to find the right mix and put your workloads in the appropriate place. It has to work both ways. It's easy to come in to a cloud provider; we need to make sure it's also easy to step back out if it doesn't work.

Now, we’re working with that company to deeply understand the rest of the business to see what are the workloads that can come out of TasmaNet, and what are the workloads that need to even move internally or actually move to an application-specific hosting environment?

Gardner: Before we close out, Joel, I’d like to look a bit to the future. We spoke earlier about how private cloud and adjusting your business appropriately to the hosting models that we’ve described is a huge step, but of course, the continuum is beyond that. It goes to hybrid. There are public cloud options, data placement, and privacy concerns that people are adjusting to in terms of location of data, jurisdictions, and so forth. Tell me about where you see it going and how an organization like yours adjusts to companies as they start to further explore that hybrid-cloud continuum?

Hybrid Offspring

Harris: Going forward, the network will probably play one of the biggest roles in cloud services over the coming 10 years. More and more, we're seeing software-defined network suppliers come into the marketplace. In Australia, we have a large data-center operator, NEXTDC, which has started its own network to connect all of its data centers. We have Megaport, which is 100 percent software-defined, where you can buy capacity for as little as an hour or for the long term. As these types of networks become common, they increasingly enable fluid movement of the services that run on top.

When you cross that with two of the other really big things happening, the Internet of Things (IoT) and 5G, you suddenly have connectivity that means data services, and therefore cloud services, can be delivered anywhere.

More and more, you’re going to see the collection of data lakes, the collection of information even by small businesses that understand that they want to keep all the information, and analyze it. As they go to cloud service providers, they will demand these data services there, too, and the analysis capabilities will become very, very powerful.

In the short term, the network is going to be the key enabler for things such as IoT, which will then flow on to support a distributed model for cloud providers over the next 10 years, whereas traditionally we've seen them centralized in the larger cities. That will change in the coming years, because there is simply too much data to centralize once people start gathering all of this information.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Logicalis chief technologist defines the new ideology of hybrid IT

The next BriefingsDirect thought leader interview explores how digital disruption demands that businesses develop a new ideology of hybrid IT.

We’ll hear how such trends as Internet of things (IoT), distributed IT, data sovereignty requirements, and pervasive security concerns are combining to challenge how IT operates. And we’ll learn how IT organizations are shifting to become strategists and internal service providers, and how that supports adoption of hybrid IT. We will also delve into how converged and hyper-converged infrastructures (HCI) provide an on-ramp to hybrid cloud strategies and adoption. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or Download a copy. 

To help us define a new ideology for hybrid IT, we’re joined by Neil Thurston, Chief Technologist for the Hybrid IT Practice at Logicalis Group in the UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why don’t we start at this notion of a new ideology? What’s wrong with the old ideology of IT?

Thurston: Good question. What we're facing now is a clash between what we've done for an awfully long time and what the large, emerging hyper-scale cloud providers have been developing.

The two clashing ideologies are these: either we continue with the technologies we've been developing (and the skills and processes we've built in-house) and push those out to the cloud, or we adopt the alternative and pull cloud technologies, think of Microsoft Azure and the forthcoming Azure Stack, into our on-premises environments. The two opposing ideologies are: do we push out, or do we pull in?

The technologies allow us to operate in a true hybrid environment. By that we mean not having isolated islands of innovation anymore. It’s not just standing things up in hybrid hyper-scale environments, or clouds, where you have specific skills, resources, teams and tools to manage those things. Moving forward, we want to have consistency in operations, security, and automation. We want to have a single toolset or control plane that we can put across all of our workloads and data, regardless of where they happen to reside.

Gardner: One of the things I encounter, Neil, when I talk to chief information officers (CIOs), is their concern that as we move to a hybrid environment, they're going to be left with the responsibility — but without the authority to control those different elements. Is there some truth to that?

Thurston: I can certainly see where that viewpoint comes from. A lot of our own customers reflect it. We're seeing a lot of organizations that may have dabbled in and cherry-picked from service management practices such as ITIL. We're now seeing more pragmatic IT service management (ITSM) frameworks, such as IT4IT, coming to the fore. These are really about pushing that responsibility level up the stack.

You’re right in that people are becoming more of a supply-chain manager than the actual manager of the hardware, facilities, and everything else within IT. There definitely is a shift toward that, but there are also frameworks coming into play that allow you to deal with that as well. 

Gardner: The notion of shadow IT becoming distributed IT was once a very dangerous and worrisome thing. Now, it has to be embraced and perhaps is positive. Why should we view it as positive?

Out of the shadow

Thurston: The term shadow IT is controversial. Within our organization, we prefer to say that shadow IT users are the digital users of the business. You have traditional IT users, but you also have digital users. I don't really think it's a shadow IT thing; they're simply a totally different use-case for service consumption.

But you’re right. They definitely need to be serviced by the organizations. They deserve to have the same level of services applied, the same governance, security, and everything else applied to them. 

Gardner: It seems that the new ideology of hybrid IT is about getting the right mix and keeping that mix of elements under some sort of control. Maybe it’s simply on the basis of management, or an automation framework of some sort, but you allow that to evolve and see what happens. We don’t know what this is going to be like in five years. 

Thurston: There are two pieces of the puzzle. There’s the workload, the actual applications and services, and then there’s the data. There is more importance placed on the data. Data is the new commodity, the new cash, in our industry. Data is the thing you want to protect. 

The actual workload and service-consumption piece is the commodity that can be worked out. Moving forward, you have to protect your data, but you can take more of a brokering approach to the workloads. If you can reach that abstraction, then you're fit for purpose and ready to move forward into the hybrid IT world.

Gardner: It’s almost like we’re controlling the meta-processes over that abstraction without necessarily having full control of what goes on at those lower abstractions, but that might not be a bad thing. 

Thurston: I have a very quick use-case. A customer of ours had been using Amazon Web Services (AWS) for the last five years, and they were getting the feeling that they were becoming tied into the platform. Over the years, their developers had used more and more of the platform services, and they weren't able to make all that code portable and take it elsewhere.

This year, they made the transformation: they decided to develop against Cloud Foundry, an open platform-as-a-service (PaaS). They have instances of Cloud Foundry across Pivotal on AWS, across IBM Bluemix, and across other cloud providers. So they now code once — and deploy anywhere on the compute workload side. Then they have a separate data fabric that regulates the data underneath. There are emerging new architectures that help you deal with this.
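As a concrete illustration of that code-once, deploy-anywhere pattern: Cloud Foundry exposes bound backing services to an app through the VCAP_SERVICES environment variable, so the same lookup code runs unchanged on Pivotal on AWS, IBM Bluemix, or any other Cloud Foundry instance. The Python sketch below assumes a hypothetical bound service named "customer-db".

```python
import json
import os

def get_service_credentials(service_name):
    """Look up a bound service's credentials, wherever this CF app runs."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in services.values():
        for instance in instances:
            if instance.get("name") == service_name:
                return instance["credentials"]
    raise KeyError(f"service {service_name!r} is not bound to this app")

# Simulate the environment a CF runtime would provide (illustrative only).
os.environ["VCAP_SERVICES"] = json.dumps({
    "postgres": [{"name": "customer-db",
                  "credentials": {"uri": "postgres://user:pw@host:5432/db"}}]
})

# The application code below is identical on every Cloud Foundry instance.
creds = get_service_credentials("customer-db")
print(creds["uri"])
```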

Gardner: It’s interesting that you just described an ecosystem approach. You’re no longer seeing as many organizations that are supplier “XYZ” shops, where 80 or 90 percent of everything would be one brand name. You just described a highly heterogeneous environment. 

Thurston: People have used cloud services, and the hyper-scale of cloud services, for specific use-cases, typically the more temporary types of workloads. Even companies born in the cloud, such as Uber and Netflix, reach inflection points where going on-premises is actually far cheaper and makes compliance with regulations far easier. People are slowly realizing, through what other people are doing — and also from their own good or bad experiences — that hybrid IT really is the way forward.

Gardner: And the good news is that if you do bring it back from the cloud or re-factor what you're doing on-premises, there are some fantastic new infrastructure technologies: converged infrastructure, hyper-converged infrastructure, and the software-defined data center (SDDC). At recent HPE Discover events, we've seen memory-driven computing, and we're seeing some interesting, powerful new speeds and feeds along those lines.

So, on the economics and the price-performance equation, the public cloud is good for certain things, but there’s some great attraction to some of these new technologies on-premises. Is that the mix that you are trying to help your clients factor?

Thurston: Absolutely. We're pretty much in parallel with the way HPE approaches things, with the right mix. We see that in certain industries there are always going to be things like regulated data. Regulated data is really hard to control in a public-cloud space, where you have no real idea where things are and can't easily pin them down physically.

Staying on-premises provides a far easier route to meeting regulation, and today's technologies, hyper-converged platforms for example, allow us to really condense the footprint. We don't need massive data centers anymore.

We’re working with customers where we have taken 10 or 12 racks worth of legacy classic equipment and with a new hyper-converged, we put in less than two racks worth of equipment. So, the actual operational footprint of facilities cost is much less. It makes it a far more compelling argument for those types of use-cases than using public cloud.

Gardner: Then you can mirror that small footprint data center into a geography, if you need it for compliance requirements, or you could mirror it for reasons of business continuity and backup and recovery. So, there are lots of very interesting choices. 

Neil, tell us a little bit about Logicalis. I want to make sure all of our listeners and readers understand who you are and how you fit into helping organizations make these very large strategic decisions.

Cloud-first is not cloud-only 

Thurston: Logicalis is essentially a digital business enabler. We take technologies across multiple areas and help our customers become digital-ready. We cover a whole breadth of technologies. 

I look after the hybrid IT practice, but we also have the more digital-focused parts of our business, such as collaboration and analytics. The hybrid IT side is where we work with our customers through the pains they have and the decisions they have to make; very often, board-level decisions mandate a "cloud-first" strategy.

It’s unfortunate when that gets interpreted as “cloud-only.” There is some process to go through for cloud readiness, because some applications are not going to be fit for the cloud. Some cannot be virtualized; most can, but there are always regulations. Certainly, in Europe at present there is a lot of fear, uncertainty, and doubt (FUD) in the market, and there is a lot of uncertainty around European Union General Data Protection Regulation (EU GDPR), for example, and overall data protection.

There are a lot of reasons why we have to take a more factored, measured approach to looking at where workloads and data are best placed moving forward, and at the models you want to operate in.

Gardner: I think HPE agrees with you. Their strategy is to put more emphasis on things like high-performance computing (HPC), workloads that likely won't be virtualized and won't work well in a one-size-fits-all public cloud environment. It also factors in the importance of the edge, even thinking about putting the equivalent of a data center at the edge to meet the information demands of IoT, along with the analytics, data, and compute requirements there.

What’s the relationship between HPE and Logicalis? How do you operate as an alliance or as a partnership?

Thurston: We have a very strong partnership. We have a 15- or 16-year relationship with HPE in the UK. Like everyone else, we started out selling servers and storage, but we've taken the journey with HPE and with our customers. The great thing about HPE is that they've always managed to innovate and keep up with the curve, and that's enabled us to work with our customers and decide what the right technologies are. Today, this allows us to work out the right mix of on-premises and off-premises equipment for our customers.

HPE is ahead of the curve in various technologies in our area, and one of those is HPE Synergy. We're now talking with a lot of our customers about the next curve that's coming with infrastructure-as-code, and what the possible benefits and outcomes of enabling that technology will be.

The on-ramp to that is using hyper-converged technologies to virtualize all the workloads and make them portable, so that we can abstract them and place them either within platform services or within cloud platforms, as our security policies dictate.

Gardner: Getting back to this ideology of hybrid IT, when you have disparate workloads and you’re taking advantage of these benefits of platform choice, location, model and so forth, it seems that we’re still confronted with that issue of having the responsibility without the authority. Is there an approach that HPE is taking with management, perhaps thinking about HPE OneView that is anticipating that need and maybe adding some value there?

Thurston: With the HPE toolsets, we're able to set things such as policies. Today, we're really at Platform 2.5, and the inflection that takes us on to the third platform is policy automation. That is one thing HPE OneView allows us to do across the board.

It’s policies on our storage resources, policies on our compute resources, and again, policies on non-technology, so quotas on public cloud, and those types of things. It enables us to leverage the software-defined infrastructure that we have underneath to set the policies that define the operational windows that we want our infrastructure to work in, the decisions it’s allowed to make itself within that, and we’ll just let it go. We really want to take IT from “high touch” to “low touch,” that we can do today with policy, and potentially, in the future with infrastructure as code, to “no touch.” 

Gardner: As you say, we are at Platform 2.5, heading rapidly towards Platform 3. Do you have some examples you can point to, customers of yours and HPE’s, and describe how a hybrid IT environment translates into enablement and business benefits and perhaps even economic benefits? 

Time is money

Thurston: The University of Wolverhampton is one of our customers, where we’ve taken this journey with them with HPE, with hyper-converged platforms, and created a hybrid environment for them. 

Today, the hybrid environment means they're wholly virtualized on the HPE hyper-converged platform. We've rolled the solutions out across their campus. Where we normally would have had disparate clouds, we now have a single control plane in OneView that enables them to balance workloads across the whole campus and all of their departments. It brings new capabilities, such as agility, so they can react a lot more quickly.

Before, a lot of the departments were coming to them with requirements that took 12 to 16 weeks to fulfill. Now, we can do the technology piece within hours, and the whole process within days. We're talking about a factor-of-10 reduction in the time to deliver services.

As they say, success breeds success. Once someone sees what the other department is able to do, that generates more questions, more requests, and it becomes a self-fulfilling prophecy. 

We’re working with them to enable the next phase of this project. That is to leverage the hyper-scale of public clouds, but again, in a more controlled environment. Today, they’re used to the platform. That’s all embedded in. They are reaping the benefits of that from mainly an agility perspective. From an operational perspective, they are reaping the benefits of vastly reduced system, and more importantly, storage administration. 

The storage administrators have seen an 85 percent saving in the time required to administer storage by having it wholly virtualized, which is fantastic from their perspective. It means they can concentrate on developing the next phase, which is taking this ideology out to the public cloud.

Gardner: Let’s look to the future before we wrap this up. What would you like to see, not necessarily from HPE, but what can the vendors, the suppliers, or the public-cloud providers do to help you make that hybrid IT equation work better? 

Thurston: A lot of our mainstream customers think they're late to adoption, but typically that's because they're waiting either for a de-facto standard to win in the market or for standards bodies to act. Interoperability between platforms and standards is really the key to driving better adoption.

Today with AWS, Azure, and the rest, there's no real compatibility we can rely on; we can only abstract things further up. This is why I think platform-as-a-service offerings, things like Cloud Foundry and other open platforms, will become the future platforms of choice for the forward thinkers who want to adopt hybrid IT.

Gardner: It sounds like what you are asking for is a multi-cloud set of options that actually works and is attainable. 

Thurston: It’s like networking, with Ethernet. We have had a standard, everyone adheres to it, and it’s a commodity. Everyone says public cloud is a commodity. It is, but unfortunately what we don’t have is the interoperability of the other standards, such as we find in networking. That’s what we need to drive better adoption, moving forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or Download a copy. Sponsor: Hewlett Packard Enterprise.


Converged IoT systems: Bringing the data center to the edge of everything

The next BriefingsDirect thought leadership panel discussion explores the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things (IoT) requirements.

The demands of data processing, real-time analytics, and platform efficiency at the intercept of IoT and business benefits have forced new technology approaches. We’ll now learn how converged systems and high-performance data analysis platforms are bringing the data center to the operational technology (OT) edge.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To hear more about the latest capabilities in gaining unprecedented measurements and operational insights where they're needed most, please join me in welcoming Phil McRell, General Manager of the IoT Consortia at PTC; Gavin Hill, IoT Marketing Engineer for Northern Europe at National Instruments (NI) in London; and Olivier Frank, Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving this need for a different approach to computing when we think about IoT and we think about the “edge” of organizations? Why is this becoming such a hot issue?

McRell: There are several drivers, but the most interesting one is economics. In the past, the costs that would have been required to take an operational site — a mine, a refinery, or a factory — and do serious predictive analysis, meant you would have to spend more money than you would get back.

For very high-value assets — assets that are millions or tens of millions of dollars — you probably do have some systems in place in these facilities. But once you get a little bit lower in the asset class, there really isn’t a return on investment (ROI) available. What we’re seeing now is that’s all changing based on the type of technology available.

Gardner: So, in essence, we have this whole untapped tier of technologies that we haven’t been able to get a machine-to-machine (M2M) benefit from for gathering information — or the next stage, which is analyzing that information. How big an opportunity is this? Is this a step change, or is this a minor incremental change? Why is this economically a big deal, Olivier?

Frank: We’re talking about Industry 4.0, the fourth generation of change — after steam, after the Internet, after the cloud, and now this application of IoT to the industrial world. It’s changing at multiple levels. It’s what’s happening within the factories and within this ecosystem of suppliers to the manufacturers, and the interaction with consumers of those suppliers and customers. There’s connectivity to those different parties that we can then put together.

While our customers have been doing process automation for 40 years, what we're doing together is unleashing IT standardization, taking technologies that were in the data centers and applying them to the world of process automation, opening it up.

The analogy is what happened when mainframes were challenged by mini computers and then by PCs. It’s now open architecture in a world that has been closed.

Gardner: Phil mentioned ROI, Gavin. What is it about the technology price points and capabilities that have come down to the point where it makes sense now to go down to this lower tier of devices and start gathering information?

Hill: There are two pieces to that. The first is that we're realizing that understanding the IoT world is more valuable than we thought. McKinsey Global Institute did a study saying that by about 2025, IoT in the factory space is going to be worth somewhere between $1.2 trillion and $3.7 trillion. That says a lot.

The second piece is that we’re at a stage where we can make technology at a much lower price point. We can put that onto the assets that we have in these industrial environments quite cheaply.

Then, you deal with the real big value, the data. All three of us are quite good at getting the value from our own respective areas of expertise.

Look at someone that we’ve worked with, Jaguar Land Rover. In their production sites, in their power train facilities, they were at a stage where they created an awful lot of data but didn’t do anything with it. About 90 percent of their data wasn’t being used for anything. It doesn’t matter how many sensors you put on something. If you can’t do anything with the data, it’s completely useless.

They have been using techniques similar to what we’ve been doing in our collaborative efforts to gain insight from that data. Now, they’re at a stage where probably 90 percent of their data is usable, and that’s the big change.

Collaboration is key

Gardner: Let’s learn more about your organizations and how you’re working collaboratively, as you mentioned, before we get back into understanding how to go about architecting properly for IoT benefits. Phil, tell us about PTC. I understand you won an award in Barcelona recently.

McRell: That was a collaboration that our three organizations did with a pump and valve manufacturer, Flowserve. As Gavin was explaining, there was a lot of learning that had to be done upfront about what kind of sensors you need and what kind of signals you need off those sensors to come up with accurate predictions.

When we collaborate, we rely heavily on NI for their scientists and engineers to provide their expertise. We really need to consume digital data. We can’t do anything with analog signals and we don’t have the expertise to understand what kind of signals we need. When we obtain that, then with HPE, we can economically crunch that data, provide those predictions, and provide that optimization, because of HPE’s hardware that now can live happily in those production environments.

Gardner: Tell us about PTC specifically; what does your organization do?

McRell: For IoT, we have a complete end-to-end platform that allows everything from the data acquisition gateway with NI all the way up to machine learning, augmented reality, dashboards, and mashups, any sort of interface that might be needed for people or other systems to interact.

In an operational setting, there may be one, two, or dozens of different sources of information. You may have information coming from the programmable logic controllers (PLCs) in a factory and you may have things coming from a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. There are all kinds of possible sources. We take that, orchestrate the logic, and then we make that available for human decision-making or to feed into another system.

Gardner: So the applications that PTC is developing are relying upon platforms and the extension of the data center down to the edge. Olivier, tell us about Edgeline and how that fits into this?

Frank: We came up with this idea of leveraging the enterprise computing excellence that is our DNA within HPE. As our CEO said, we want to be the IT in the IoT.

According to IDC, 40 percent of IoT computing will happen at the edge. Just to clarify, it's not an opposition between the edge and the hybrid IT we have at HPE; it's actually a continuum. You need to bring some of the workloads to the edge. It's the notion of time-to-insight and time-to-action: the closer you are to what you're measuring, the more real-time you are.

We came up with this idea: what if we could bring the depth of computing we have in the data center into this sub-second environment, where I need to read the intelligent data created by my two partners here, but also act on it and do things with it?

Take the example of an electrical short circuit that has caught fire. You don't want to send the data to the cloud; you want to take immediate action. That is the notion of real-time, immediate action.

We take the deep compute and we integrate the connectivity with NI. We're the first platform to have integrated an industry standard called PXI, which allows NI to bring its great portfolio of sensors, signal acquisition, and analog-to-digital conversion technologies into our systems.

Finally, we bring enterprise manageability. Since there is a proliferation of systems, system management at the edge becomes a problem. So we take our award-winning Integrated Lights-Out (iLO) technology, with millions of licenses sold across our ProLiant servers, and bring it to the edge as well.

Gardner: We have the computing depth from HPE, and we have insightful analytics and applications from PTC. What does NI bring to the table? Describe the company for us, Gavin.

Working smarter

Hill: NI is about a $1.2 billion company worldwide, and we're involved in a lot of industries. In the IoT space, where we fit within this collaboration with PTC and HPE is our ability to make a lot of machines smarter.

There are already some sensors on the assets, machines, pumps, whatever they may be, on the factory floor. But older, and even some newer, devices don't natively have all the sensors you need to make really good decisions based on that data. To feed into the PTC and HPE systems, you need the right type of data to start with.

We have the data acquisition and control units that allow us to take that data in and then do something smart with it. Using something like our CompactRIO System or, as you described, the PXI platform with the Edgeline products, we can add a level of understanding, a smart nature, to these potentially dumb devices. It allows us not only to take in signals, but also potentially to control the systems.

We not only have some great information from PTC that lets us know when something is going to fail, but we could potentially use their data and their information to allow us to, let’s say, decide to run a pump at half load for a little bit longer. That means that we could get a maintenance engineer out to an oil rig in an appropriate time to fix it before it runs to failure. We have the ability to control as well as to read in.

The other piece is that sensor data is great, and we like to be as open as possible in taking data from any sensor vendor or provider, but you want to be able to find the needle in the haystack. We do feature extraction to make sure we give the important pieces of digital data back to PTC, so that they can be processed by the HPE Edgeline system as well.
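As a generic illustration of that feature-extraction step, the NumPy sketch below condenses a raw vibration waveform into two digital features, the RMS level and the dominant frequency. It is not NI's actual firmware; the signal and sample rate are simulated.

```python
# A generic sketch of edge feature extraction: reduce a raw waveform to a
# few digital values worth shipping upstream, instead of the whole signal.
import numpy as np

def extract_features(signal, sample_rate_hz):
    """Condense a raw waveform into a handful of meaningful features."""
    rms = float(np.sqrt(np.mean(signal ** 2)))               # overall energy
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    dominant_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return {"rms": rms, "dominant_hz": dominant_hz}

# Simulated 50 Hz vibration with noise, sampled at 10 kHz for one second.
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
waveform = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(extract_features(waveform, sample_rate_hz=10_000))
```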

Frank: This is fundamental. Capturing the right data is an art and a science, and that's really what NI brings, because you don't want to capture noise and add to the proliferation of data. That's a unique expertise that we're very glad to integrate into the partnership.

Gardner: We certainly understand the big benefit of IoT extending what people have done with operational efficiency over the years. We now know that we have the technical capabilities to do this at an acceptable price point. But what are the obstacles, what are the challenges, that organizations still have in creating a true data-driven edge, an IoT-rich environment, Phil?

Economic expertise

McRell: That’s why we’re together in this consortium. The biggest obstacle is that because there are so many different requirements for different types of technology and expertise, people can become overwhelmed. They’ll spend months or years trying to figure this out. We come to the table with end-to-end capability from sensors and strategy and everything in between, pre-integrated at an economical price point.

Speed is important. Many of these organizations are seeing the future, where they have to be fast enough to change their business model. For instance, some OEM discrete manufacturers are going to have to move pretty quickly from just offering product to offering service. If somebody is charging $50 million for capital equipment, and their competitor is charging $10 million a year and the service level is actually better because they are much smarter about what those assets are doing, the $50 million guy is going to go out of business.

We come to the table with the ability to quickly get that factory and those assets smart and connected, and to make sure the right people, parts, and processes are brought to bear at exactly the right time. That drives all the things people are looking for — the up-time, the safety, the yield, and the performance of that facility. It comes down to this: if you don't have all the right parties together, with that technology and expertise, you can very easily get stuck on something that takes a very long time to unravel.

Gardner: That’s very interesting when you move from a Capital Expenditure (CAPEX) to an Operational Expenditure (OPEX) mentality. Every little bit of that margin goes to your bottom line and therefore you’re highly incentivized to look for whole new categories of ways to improve process efficiency.

Any other hurdles, Olivier, that you’re trying to combat effectively with the consortium?

Frank: The biggest hurdle is the level of complexity; our customers don't know where to start. So the promise of us working together is to show the value of this kind of open architecture injected into a 40-year-old process-automation infrastructure, and to demonstrate, as we did yesterday with our robot powered by HPE Edgeline, that we can show immediate value to the plant manager, the quality manager, and the operations manager using the data that already resides in the factory, 70 percent or more of which goes unused. That's the value.

So how do you get that quickly and simply? That’s what we’re working to solve so that our customers can enjoy the benefit of the technology faster and faster.

Bridge between OT and IT

Gardner: Now, this is a technology implementation, but it’s done in a category of the organization that might not think of IT in the same way as the business side — back office applications and data processing. Is the challenge for many organizations a cultural one, where the IT organization doesn’t necessarily know and understand this operational efficiency equation and vice versa, and how are we bridging that?

Hill: I’m probably going to give you the high-level end from the operational technology (OT) side as well. These guys will definitely have more input from their own domain of expertise. But, that these guys have that piece of information for that part that they know well is exactly why this collaboration works really well.

With the idea of IoT, you had situations where a lot of people stood up and said, "Yeah, I can provide a solution. I have the answer," but without having a plan — never mind a solution. We've done a really good job of understanding that we can do one part of this solution really well, and if we partner with people who are really good in the other aspects, we provide real solutions to customers. I don't think anyone can compete with us at this stage, and that is exactly why we're in this situation.

Frank: Actually, the biggest hurdle is more on the OT side, which doesn't really rely on the company's IT. For many of our customers, the factory is a silo. At HPE, we haven't been selling much into that environment. That's also why, when working as a consortium, it's important to reach the right audience, which is in the factory. We also bring our IT expertise, especially in the area of security, because the moment you put an IT device in an OT environment, you potentially have problems you didn't have before.

We’re living in a closed world, and now the value is to open up. Bringing our security expertise, our managed service, our services competencies to that problem is very important.

Speed and safety out in the open

Hill: There was a really interesting piece in the HPE Discover keynote in December, when HPE Aruba talked about the issue they faced when they started bringing conferencing technology out and suddenly everything wanted to be wireless. They said, "Oh, there's a bit of a security issue here now, isn't there? Everything is out there."

We can see what HPE has contributed to helping them from that side. What we’re talking about here on the OT side is a similar state from the security aspect, just a little bit further along in the timeline, and we are trying to work on that as well. Again, we have HPE here and they have a lot of experience in similar transformations.

Frank: At HPE, as you know, we have our Data Center and Hybrid Cloud Group and then we have our Aruba Group. When we do OT or our Industrial IoT, we bring the combination of those skills.

For example, in security we have HPE Aruba ClearPass technology to secure the industrial equipment back to the network, and then bring in wireless, which enables the augmented-reality use cases we showed onstage yesterday. It's a phased approach, but you see the power of bringing ubiquitous connectivity into the factory, which is a challenge in itself, and then securely connecting the IT systems to the OT equipment. From that, you better understand the phases and the challenges of bringing the technology to life for our customers.

McRell: It’s important to think about some of these operational environments. Imagine a refinery the size of a small city and having to make sure that you have the right kind of wireless signal that’s going to make it through all that piping and all those fluids, and everything is going to work properly. There’s a lot of expertise, a lot of technology, that we rely on from HPE to make that possible. That’s just one slice of that stack where you can really get gummed up if you don’t have all the right capabilities at the table right from the beginning.


Gardner: We’ve also put this in the context of IoT not at the edge isolated, but in the context of hybrid computing and taking advantage of what the cloud can offer. It seems to me that there’s also a new role here for a constituency to be brought to the table, and that’s the data scientists in the organization, a new trove of data, elevated abstraction of analytics. How is that progressing? Are we seeing the beginnings of taking IoT data and integrating that, joining that, analyzing that, in the context of data from other aspects of the company or even external datasets?

McRell: There are a couple of levels. It’s important to understand that when we talk about the economics, one of the things that has changed quite a bit is that you can actually go in, get assets connected, and do what we call anomaly detection, pretty simplistic machine learning, but nonetheless, it’s a machine-learning capability.

In some cases, we can get that going in hours. That's a ground-zero type of capability. Over time, you learn about a line with multiple assets and how they all function together, then how the entire facility functions, and then you compare that across multiple facilities. At some point, you're not at the edge anymore; you're doing systems-type analytics, which is a different, combined exercise.

At that point, you’re talking about looking across weeks, months, years. You’re going to go into a lot of your back-end and maybe some of your IT systems to do some of that analysis. There’s a spectrum that goes back down to the original idea of simply looking for something to go wrong on a particular asset.

The distinction I’m making here is that, in the past, you would have to get a team of data scientists to figure out almost asset by asset how to create the models and iterate on that. That’s a lengthy process in and of itself. Today, at that ground-zero level, that’s essentially automated. You don’t need a data scientist to get that set up. At some point, as you go across many different systems and long spaces of time, you’re going to pull in additional sources and you will get data scientists involved to do some pretty in-depth stuff, but you actually can get started fairly quickly without that work.

The power of partnership

Frank: To echo what Phil just said, at HPE we talk about the tri-hybrid architecture: the edge, close to the things; the data center; and the cloud, which is essentially a data center whose location you don't know. Those are the three dimensions.

The great thing partnering with PTC is that the ThingWorx platform, the same platform, can run in any of those three locations. That’s the beauty of our HPE Edgeline architecture. You don’t need to modify anything. The same thing works, whether we’re in the cloud, in the data center, or on the Edgeline.

To your point about data scientists, it's about time-to-insight. There are things you want to do immediately and, as Phil pointed out, the notion of anomaly detection that we're demonstrating on the show floor means understanding the nominal parameters after a few hours of running your thing and simply detecting something going off normal. That doesn't require data scientists; that takes us into the ThingWorx platform.

But when it comes to the industrial processes, we involve systems-integration partners and bring our own knowledge to the mix, along with our customers, because they own the intelligence of their data. That's what creates a very powerful solution.

Gardner: I suppose another benefit that the IT organization can bring to this is process automation and extension. If you’re able to understand what’s going on in the device, not only would you need to think about how to fix that device at the right time — not too soon, not too late — but you might want to look into the inventory of the part, or you might want to extend it to the supply chain if that inventory is missing, or you might want to analyze the correct way to get that part at the lowest price or under the RFP process. Are we starting to also see IT as a systems integrator or in a process integrator role so that the efficiency can extend deeply into the entire business process?

McRell: It’s interesting to see how this stuff plays out. Once you start to understand in your facility — or maybe it’s not your facility, maybe you are servicing someone’s facility — what kind of inventory should you have on hand, what should you have globally in a multi-tier, multi-echelon system, it opens up a lot of possibilities.

Today, PTC provides a lot of network visibility and spare-parts inventory management systems, but there's a limit to what those algorithms can do; they're the best possible until you have everything connected. That feedback loop allows you to modify all your expectations in real time and get things moving proactively, so that the right person, parts, process, and kit all show up at the right time.

Then, you have augmented reality and other tools, so that maybe somebody hasn’t done this service procedure before, maybe they’ve never seen these parts before, but they have a guided walk-through and have everything showing up all nice and neat the day of, without anybody having to actually figure that out. That’s a big set of improvements that can really change the economics of how these facilities run.
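For a back-of-the-envelope feel for how that feedback loop changes inventory planning, consider the textbook reorder-point formula recomputed as live predictions arrive. The Python sketch below uses invented figures and is not PTC's actual algorithm.

```python
# A back-of-the-envelope sketch of a connected-asset feedback loop updating
# spare-parts expectations: a textbook reorder point, recomputed live.
def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Stock level at which a replenishment order should be triggered."""
    return daily_demand * lead_time_days + safety_stock

# Historical plan: 2 seal kits a day, 10-day lead time, 5 kits held spare.
static_plan = reorder_point(daily_demand=2.0, lead_time_days=10, safety_stock=5)

# Connected assets predict a cluster of pump failures; demand estimate rises.
live_demand = 3.5   # hypothetical figure fed back from edge predictions
updated_plan = reorder_point(live_demand, lead_time_days=10, safety_stock=5)

print(f"static reorder point:  {static_plan:.0f} kits")   # 25
print(f"updated reorder point: {updated_plan:.0f} kits")  # 40
```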

Connecting the data

Gardner: Any other thoughts on process integration?

Frank: Again, the premise behind industrial IoT is indeed, as you're pointing out, connecting the consumer, the supplier, and the manufacturer. That's also why you see the emergence of low-power communication layers, like LoRa or Sigfox, that can bring these millions of connected devices together and inject them into the systems we're creating.

Hill: Just from this conversation, I know we're all really passionate about this; IoT, and industrial IoT in particular, is a great topic for us. And it's so much bigger than what we're talking about. You've asked a little about security, about the cloud, about integrating inventory with the production side, and it's all so much bigger still.

We could probably have a conversation twice this long on any one of these topics and still never get halfway to the end of it. It's a really exciting place to be right now. The really interesting thing that all of us now realize, and the way we've made advancements as a partnership, is that you don't know what you don't know. A lot of companies are waking up to that as well, and we're using our collaborations to help us learn what we don't know.

Frank: Which is why speed is so important. We can theorize and spend a lot of time in R&D, but the reality is that when we bring those systems to our customers, we learn new use cases and new ways to advance the technology.


Hill: The way technology has gone, no one releases a product anymore that is the finished piece and will stay as-is for 20 or 30 years. That's not what happens. Products and services are provided and then constantly updated. How many times a week does your phone update with new firmware or app versions? You have to be able to change, and use the data you get to adjust everything that's going on; otherwise you won't stay ahead of the market.

And that’s exactly what Phil described earlier when he was talking about whether you sell a product or a service that goes alongside a set of products. For me, one of the biggest things is that constant innovation — where we are going. And we’ve changed. We were in kind of a linear motion of progression. In the last little while, we’ve seen a huge amount of exponential growth in these areas.

We had a video at the end of the London HPE Discover keynote, one of HPE's visions of what the future could be. We looked at it and thought it was quite funny: there was an automated suitcase that would follow you after you left the airport. I started to laugh at that, but then I took a second and realized that maybe it's not as ridiculous as it sounds, because we as humans think linearly; that's ingrained in us. If the technology is changing in an exponential way, we cannot ignore even the most ridiculous ideas out there, because that's what's going to change the industry.

And by having that video there, by seeing what PTC is doing with its development, and by trying out different industries and applications ourselves, you see three companies that are constantly looking at what might happen next and are ready to pounce on it, each with their own expertise.

Gardner: We’re just about out of time, but I’d like to hear a couple of ridiculous examples — pushing the envelope of what we can do with these sorts of technologies now. We don’t have much time, so less than a minute each, if you can each come up perhaps with one example, named or unnamed, that might have seemed ridiculous at the time, but in hindsight has proven to be quite beneficial and been productive. Phil?

McRell: You can do this in engineering with us, you can do this in service, but we've been talking a lot about manufacturing. In the manufacturing journey, the opportunity, as Gavin and Olivier are describing, is on the level of what happened between pre- and post-electricity: how fast things will run, the quality at which they will produce products, and therefore the business models you can now have because of that capability. These are profound changes. You will see up-times in some of the largest factories in the world go up by double digits. You will see lines run multiple times faster over time.

These are things that, if you walked into some of the hardest-running plants today and then walked in again in a couple of years, would make it really hard to believe what your eyes were seeing, just as somebody who was around before factories had electricity would be astounded by what they'd see today.

Back to the Future


Gardner: One of the biggest issues at the most macro level in economics is the fact that productivity has plateaued for the past 10 or 15 years. People want to get back to what productivity was — 3 or 4 percent a year. This sounds like it might be a big part of getting there. Olivier, an example?

Frank: Well, my example is more about the impact on mankind and the creation of wealth for humanity. Think about these technologies combined with 3D printing: you can have a new class of manufacturers anywhere in the world, in Africa, for example, doing real-time engineering with some of the concepts we're demonstrating today.

Another part of PTC is Computer-Aided Design (CAD) systems and Product Lifecycle Management (PLM), and we're showing real-time engineering on the floor again. You design those products and do quick prototyping with your 3D printing, anywhere in the world. Then you have your users testing the real thing, understanding whether your engineering choices were relevant and whether there are differences between the digital model and the physical model, this digital twin idea.

Then, you’re back to the drawing board. So, a new class of manufacturers that we don’t even know, serving customers across the world and creating wealth in areas that are (not) up to date, not industrialized.


Gardner: It’s interesting that if you have a 3D printer you might not need to worry about inventory or supply chain.

Hill: Just to add one point, the bit that really excites me about where we are with technology as a whole, not just within this collaboration, is that you have 3D printing and you have the availability of open software. We all provide very software-centric products, things you can adjust yourself, and that is the way of the future.

That means that, among the changes we see in the manufacturing industry, the next great idea could come from someone who has been in the production plant for 20 years, or it could come from Phil, who works in the bank down the road, because at a really good price point he has access to that technology. That is one of the coolest things I can think about right now.

Where we’ve seen this sort of development and this use of these sort of technologies and implementations and seen a massive difference, look at someone like Duke Energy in the US. We worked with them before we realized where our capabilities were, never mind how we could implement a great solution with PTC and with HPE. Even there, based on our own technology, those guys in the para-production side of things in some legacy equipment decided to try and do this sort of application, to have predictive maintenance to be able to see what’s going on in their assets, which are across the continent.

They began this at the start of 2013, and they have seen savings estimated at $50 million up to this point. That's a number.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Sumo Logic CEO on how modern apps benefit from ‘continuous intelligence’ and DevOps insights

The next BriefingsDirect applications health monitoring interview explores how a new breed of continuous intelligence emerges by gaining data from systems infrastructure logs — either on-premises or in the cloud — and then cross-referencing that with intrinsic business metrics information.

We’ll now explore how these new levels of insight and intelligence into what really goes on underneath the covers of modern applications help ensure that apps are built, deployed, and operated properly.

Today, more than ever, how a company’s applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work, too.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

We’re joined by an executive from Sumo Logic to learn why modern applications are different, what’s needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

To describe how to build and maintain the best applications, welcome Ramin Sayar, President and CEO of Sumo Logic. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible types of applications, whether delivered through rich web and mobile interfaces or through traditional mechanisms served up on laptops, other access points, and point-of-sale systems, is driving the next wave of technology architecture supporting these apps.

These modern apps are built on a modern stack: they’re using new platform services created by public-cloud providers, they’re using new development processes such as agile and continuous delivery, and they’re expected to constantly learn and iterate so they can improve not only the user experience, but the business outcomes.

Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

User experience

Sayar: You’re spot on. The obvious benefit of always-on centers on rich user interaction and user experience. But while a lot of the conversation around modern apps tends to focus on the technology and the components, there are also fundamental challenges in how these new apps are built and managed on an ongoing basis, and in what implications that has for security. A lot of times, those two aspects are left out when people are discussing modern apps.

Sayar

Gardner: That’s right. We’re now talking so much about DevOps these days, but in the same breath, we’re talking about SecOps, security and operations. They’re really joined at the hip.

Sayar: Yes, they’re starting to blend. You’re seeing the technology decisions around public cloud, around Docker and containers, and around microservices and APIs being led not only by developers or DevOps teams. Those teams are heavily influenced by, and partnering with, the SecOps and security teams and CISOs, because the data is distributed. Now there needs to be better visibility and instrumentation, not just for the access logs, but for the business process and a holistic view of the service and service-level agreements (SLAs).

Gardner: What’s different from, say, 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach to iteration and release. Often, those run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.

Access the Webinar
On Gaining Operational Visibility
Into AWS

We see one of two things happening. The first is a need to replace the front end of those apps; we refer to those as brownfield. They change from waterfall to agile and start to have more of an N-tier feel, though it’s really more about the front end. Your web properties are a good example of that. These customers start to componentize pieces of their apps, either on VMs or in private clouds, and that’s often good for existing types of workloads.

The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

Often it’s centered on new technology: a stack built entirely on microservices, an API-first development methodology, modern container technologies like Docker, Mesosphere, and CoreOS, and public-cloud infrastructure and services from Amazon Web Services (AWS) or Microsoft Azure. As a result, the technology decisions made there require different skill sets, and teams have to come together to deliver on the DevOps and SecOps processes we just mentioned.

Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren’t important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can’t tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it’s all apps, right?

It’s a data problem

Sayar: Absolutely. Regardless of whether it’s enterprise or consumer, whether you’re building these apps for mid-market, small and medium business (SMB), or enterprise, what we see from our customers is that they all have a similar challenge: they’re trying to deal with the volume, the velocity, and the variety of the data around these new architectures, and with how to grapple with it and get their hands around it. At the end of the day, it becomes a data problem, not just a process or technology problem.

Gardner: Let’s talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility: not only from your code and the repositories and build tools like GitHub and Jenkins, but all the way through the components of your code and its API calls, to the deployment tools used for provisioning, and on to performance.

We spend a lot of effort integrating with the various DevOps tool-chain vendors, and we provide a holistic view of what users are doing in terms of access to those applications and services. We know who checked in which code, on which branch, and which build created potential issues for performance, latency, or an outage. So we give you that 360-degree view by providing that full-stack set of capabilities.
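
To make that 360-degree view concrete, here is a minimal sketch of the kind of correlation Sayar describes: flagging a deploy whose surrounding error rate jumps. The build names, timestamps, and the doubling threshold below are illustrative assumptions, not Sumo Logic's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical inputs: deploy events from a CI tool, plus error-log timestamps.
deploys = [
    {"build": "web-frontend#482", "commit": "9f3c2ab", "at": datetime(2017, 3, 1, 14, 0)},
]
error_times = (
    [datetime(2017, 3, 1, 13, 48), datetime(2017, 3, 1, 13, 55)] +           # quiet before
    [datetime(2017, 3, 1, 14, 1) + timedelta(minutes=m) for m in range(12)]  # noisy after
)

def error_rate(events, start, end):
    """Errors per minute inside the half-open window [start, end)."""
    hits = [t for t in events if start <= t < end]
    minutes = (end - start).total_seconds() / 60
    return len(hits) / minutes

for d in deploys:
    before = error_rate(error_times, d["at"] - timedelta(minutes=15), d["at"])
    after = error_rate(error_times, d["at"], d["at"] + timedelta(minutes=15))
    if after > 2 * before:  # naive rule: error rate at least doubled after the deploy
        print(f"{d['build']} (commit {d['commit']}) correlates with an error spike: "
              f"{before:.2f}/min -> {after:.2f}/min")
```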

Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

Sayar: We’ve invested quite a bit of our intellectual property (IP) not only in integrating with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of an architecture that is truly cloud-native, multitenant, fast, and simple.

So, unlike others that are out there, Sumo Logic’s architecture is truly cloud-native and multitenant, and it’s centered on the principle of near real-time data streaming.

As the data comes in, our data-streaming engine allows developers, IT ops administrators, sys admins, and security professionals to each have their own view, coarse-grained or fine-grained, through the role-based access controls we have in the system. They can leverage the same data for different purposes, versus having to wait for someone to create a dashboard or a view, or waiting to get access to a system when something breaks.
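
As a rough illustration of the streaming idea, where each record updates the view immediately rather than waiting on a batch report, here is a toy sliding-window counter; the log convention and window size are assumptions, and role-based filtering would layer on top of a view like this:

```python
import collections

def rolling_error_count(stream, window_seconds=60):
    """Consume (timestamp, line) pairs; yield the error count over a sliding window."""
    window = collections.deque()
    for ts, line in stream:
        if "ERROR" in line:                      # assumed log convention
            window.append(ts)
        while window and ts - window[0] > window_seconds:
            window.popleft()                     # expire records outside the window
        yield ts, len(window)

# Simulated feed: an error every other second for ten seconds.
feed = [(t, "ERROR timeout" if t % 2 == 0 else "INFO ok") for t in range(10)]
for ts, errors in rolling_error_count(iter(feed)):
    print(f"t={ts}s, errors in last 60s: {errors}")
```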

Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You’d get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what’s the big step to going to streaming? Why is that an essential part of making this so actionable?

Sayar: It’s driven by the architectures and the applications. No longer is it acceptable to look at samples of data that span 5 or 15 minutes. You need real-time data, with sub-second, millisecond latency, to understand causality, and to know whether you’re facing a potential threat, risk, or security concern, versus code-quality issues that are causing performance outages and therefore business impact.

The old way, deploying code and hoping you’d find problems only when a user complained, is no longer acceptable. You lose business and credibility, and at the end of the day there’s no real way to hold developers, operations folks, or security folks accountable, because of the legacy tools and process approach.

Center of the business

Those expectations have changed because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way to analyze the metadata coming in, with very simple access through APIs or through our user interfaces, based on your role, so you can address issues proactively.

Conceptually, there’s this notion of wartime and peacetime in how we build and deliver our service. We look at the problems that users, both customers of Sumo Logic and our own teams internally, are used to, and we break that down into a lifecycle centered on this concept of peacetime and wartime.

Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

Then, there’s this notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or a security risk has been discovered, the real-time data-streaming engine notifies people instantly: you’re getting PagerDuty alerts, you’re getting Slack notifications. It’s no longer the traditional helpdesk notification process, with people getting on bridge lines.

Because the teams are often distributed, and identifying an issue in wartime is a shared responsibility, we’re enabling new ways of collaborating by leveraging integrations with things like Slack and PagerDuty notification systems through the real-time platform we’ve built.
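
The integrations Sayar mentions boil down to pushing an event the moment a threshold trips. As a minimal sketch (the webhook URL, service names, and threshold are placeholders, not Sumo Logic's API), a Slack incoming webhook takes a simple JSON POST:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder URL

def notify_wartime(service, metric, value, threshold):
    """Send an instant 'all hands' alert to a Slack channel via an incoming webhook."""
    text = (f"{service}: {metric}={value} crossed threshold {threshold}. "
            "All hands on deck.")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; real code would retry and log failures

notify_wartime("checkout-api", "error_rate", 0.07, 0.01)
```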

So, the always-on application expectations that customers and consumers have are now matched by always-on, always-available development and security resources that can address problems proactively.

Gardner: It sounds like we’re able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

Sayar: That’s exactly right. The essence of what we provide is a platform that leverages machine logs and time-series data from a single service, eliminating a lot of the complexity in traditional processes and tools. No longer do you need to do “swivel-chair” correlation across multiple UIs, tools, and products. No longer do you have to wait for the helpdesk person to notify you. We’re trying to provide instant knowledge and collaboration through the real-time data-streaming platform we’ve built, to bring teams together rather than divide them.

Gardner: That sounds terrific if I’m the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business-process or even C-table level? What is it about continuous intelligence that can not only help apps run on time and well, but help my business run on time and well?

Need for agility

Sayar: We talked a little bit about the need for agility. From a business point of view, the line-of-business folks associated with any of these greenfield projects or apps want to increase the cycle times of application delivery. They want measurable results from application and web changes, so they can see whether their web properties have increased or decreased user satisfaction and, at the end of the day, business revenue.

So, we’re able to help the developers, the DevOps teams, and ultimately, line of business deliver on the speed and agility needs for these new modes. We do that through a single comprehensive platform, as I mentioned.

At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

At the same time that we’re helping with agility, we’re also helping with prevention. A lot of our customers start with the security teams, who are looking for a new way to inspect this volume of data coming in, not just at the infrastructure or end-user level, but at the application and code level. What we’re really able to do, as I mentioned earlier, is provide a unifying approach that brings these disparate teams together.

Download the State
Of Modern Applications
In AWS Report

Gardner: And yet individuals can extract the intelligence view that best suits what their needs are in that moment.

Sayar: Yes. And ultimately we’re able to improve customer experience, increase revenue-generating services, increase the efficiency and agility of delivering quality code and applications, and, lastly, improve collaboration and communication.

Gardner: I’d really like to hear some real-world examples of how this works, but before we go there, I’m still interested in the how. We’re hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What are you using machine learning for when it comes to this volume and variety, in understanding apps and making that usable in the context of a business metric of some kind?

Sayar: This is an interesting topic, because there’s a lot of noise in the market around big data, machine learning, and advanced analytics. When Sumo Logic was started six years ago, we built this platform to ensure not only that we have best-in-class security and encryption capabilities, but that it is centered on the fundamental purpose of democratizing analytics: making it simpler for more than just a subset of folks to get access to the information relevant to their roles and responsibilities, whether they’re on security, ops, or development teams.

To answer your question a little more succinctly, our platform is built on multiple levels of machine learning and analytics capabilities. Starting at the lowest level, something we refer to as LogReduce is meant to separate the signal from the noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they’re not searching the irrelevant data. They’re searching the relevant data, the infrequent or little-known patterns, versus what’s constantly occurring in their environment.

In doing so, it’s not just about mean time to identification, but also about how quickly we’re able to respond and repair. We’ve seen customers use LogReduce to cut mean time to resolution by upwards of 50 percent.
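
LogReduce itself is proprietary, but the general idea behind signature-based log clustering can be sketched in a few lines: mask the variable fields so that millions of similar lines collapse into a handful of templates, then surface the rare ones first. The masking rules and sample lines below are illustrative assumptions, not Sumo Logic's algorithm:

```python
import re
from collections import Counter

def signature(line):
    """Collapse a raw log line into a coarse template by masking variable fields."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # pointer-like values
    line = re.sub(r"\d+", "<NUM>", line)             # IDs, counts, IPs, latencies
    return line

logs = [
    "GET /api/user/1041 200 12ms",
    "GET /api/user/2212 200 9ms",
    "GET /api/user/877 200 15ms",
    "DB connection refused from 10.0.3.7",
]

clusters = Counter(signature(line) for line in logs)
# Rarest templates first: that is the 'signal' an operator actually wants to see.
for template, count in sorted(clusters.items(), key=lambda kv: kv[1]):
    print(f"{count:>3}  {template}")
```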

Predictive capabilities

Our core analytics, at the lowest level, helps solve for operational metrics and value. Then, we start to become less reactive. Once you’ve had an outage or a security threat, you start to leverage some of the other, predictive capabilities in our stack.

For example, I mentioned this concept of peacetime and wartime. In peacetime, you’re looking at changes over time as you deploy code and applications to various geographies and locations. A lot of times, developers and ops folks who use Sumo want to use the log-compare or outlier-predictor operators in our machine learning capabilities to compare branches and the quality of their code against the performance and availability of the service and app.

We allow them, with the click of a button, to compare a window of events and metrics for the last hour, day, week, or month against other time slices of data, and to show how much better or worse it is. This is before deploying to production. In production, we let them use predictive analytics to look at anomalies and abnormal behavior and get more proactive.
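
A toy analogue of that compare-the-windows workflow: take last week's values as a baseline, then z-score the current window against it and flag anything that deviates sharply. The metric, window sizes, and the 3-sigma threshold are assumptions for illustration:

```python
import statistics

def window_outliers(baseline, current, z_threshold=3.0):
    """Return (value, z-score) pairs from 'current' that deviate from 'baseline'."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # avoid dividing by zero on flat data
    return [(x, (x - mean) / stdev) for x in current
            if abs(x - mean) / stdev > z_threshold]

last_week_latency_ms = [102, 98, 105, 99, 101, 97, 103, 100]  # baseline window
this_hour_latency_ms = [104, 99, 250, 101]                    # window under review

for value, z in window_outliers(last_week_latency_ms, this_hour_latency_ms):
    print(f"outlier: {value} ms (z={z:.1f})")
```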

So, reactive, to proactive, all the way to predictive is the philosophy that we’ve been trying to build in terms of our analytics stack and capabilities.

Gardner: How are some actual customers using this and what are they getting back for their investment?

Sayar: We have customers spanning retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We’re well north of 1,200 unique paying customers, including Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, and Xora: modern companies as well as traditional ones.

What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that’s where they start leveraging Sumo Logic.

Second, what we see is that it’s not always a digital transformation; it’s often a cost-reduction and/or consolidation project. The consolidation could be of tools or of infrastructure and data centers, or it could be a migration to colos or public-cloud infrastructure.

The nice thing about Sumo Logic is that we can connect anything from your top-of-rack switch, to your discrete storage arrays, network devices, operating systems, and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.

Whether it’s a migration or a consolidation project, we’re able to help them compare performance and availability, the SLAs associated with those, and differences in the delivery of infrastructure services to developers or users.

So whether it’s agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

Gardner: Ramin, how about a couple of concrete examples of what you were just referring to?

Cloud migration

Sayar: One good example is in the media and entertainment space: Hearst Media. Like a lot of our other customers, they were undergoing a digital-transformation and cloud-migration project. They were moving about 36 apps to AWS, and they needed a single platform providing machine-learning analytics to recognize and quickly identify performance issues before migrating and updating any of the apps rolling over to AWS. They were able to really improve cycle times, as well as efficiency, in identifying and resolving issues fast.

Another example is JetBlue. We do a lot in the travel space, and JetBlue is another AWS and cloud customer. They provide a lot of in-flight entertainment to their customers, and they wanted to look at service quality for the revenue model of the in-flight entertainment system: what movies are being watched, what the quality of service is, whether it’s being degraded, and whether customers are being charged more than once for any service outages. That’s how they’re using Sumo Logic to better assess and manage customer experience. It’s not too dissimilar from Alaska Airlines or others that provide in-flight notification and wireless services.

The last one is someone we’re all pretty familiar with: Airbnb. We’re seeing a fundamental disruption in the travel space in how we reserve hotels, apartments, or homes, and Airbnb has led that charge, like Uber in the transportation space. In their case, they handle a lot of credit-card and payment-processing information, and they’re using Sumo Logic for payment-card industry (PCI) audit and security, as well as for operational visibility into their websites and presence.

Gardner: It’s interesting. Not only are you giving them benefits along insight lines, but it sounds like you’re giving them a green light to go ahead and experiment and then learn very quickly whether that experiment worked or not, so they can refine. That’s so important in the digital business and agility drive these days.

Sayar: Absolutely. And to give another interesting example, Anheuser-Busch is also one of our customers. In this case, the CISO wanted a new approach to security, not one centered on guarding the data and access to it, but one providing a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

We did a pilot for them, and as they modernize a lot of their apps and look at the next generation of security analytics, the adoption of Sumo inside AB InBev has become almost instant. Now, they’re looking not just at their existing estate of infrastructure and apps for all these teams; they’re going to connect it to future projects such as the Connected Path, so they can understand the yield from each pour of a particular keg in a location, and figure out whether that’s optimized or when they should replace the keg.

So, you’re going from a reactive approach to security and to processes around deployment and operations, toward next-generation connected Internet of Things (IoT) devices, to understand business performance and yield. That’s a great example of an innovative company doing something unique and different with Sumo Logic.

Gardner: So, what happens as these companies modernize and avail themselves of more public-cloud infrastructure services, and ultimately more and more of their apps are of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

Data source and location

Sayar: Whether you’re running on-premises, in colos, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, you need a single platform that can manage and ingest all that data for you. Interestingly enough, about half of our customers’ workloads run on-premises and half run in the cloud.

We’re agnostic to where the data is or where the applications and workloads reside. The benefit we provide is a single, ubiquitous platform for managing the data streams coming in from devices, applications, infrastructure, and mobile, delivered to you in a simple, real-time way through a multitenant cloud service.

Gardner: This reminds me of what I heard 10 or 15 years ago about business intelligence (BI): drawing data, analyzing it, making it nearly proactive in its ability to help the organization. How is continuous intelligence different, or even better, and something that would replace what we refer to as BI?

Sayar: The issue we faced with the first generation of BI was that it was very rear-view-mirror-centric, meaning it looked at data and events in the past. Where we are today, with the need for speed and the necessity to be always on and always available, the expectation is sub-millisecond latency to understand what’s going on, from a security, operational, or user-experience point of view.

I’d say we’re on V2, the next generation of what was traditionally called BI, and we refer to it as continuous intelligence because you’re continuously adapting and learning. It’s based not only on what humans know and the rules, correlations, alarms, and filters they try to presuppose and create; machine intelligence supplements that to provide the best-in-class capability we call continuous intelligence.

Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there’s a lot of investing going on now around big data and analytics as it pertains to many different elements of many different businesses, depending on their verticals. Then, we’re talking about the Sumo Logic benefit, continuous intelligence as it applies to applications and their lifecycle.

Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

Business Insights

Sayar: We touched a little bit on that in terms of the types of data we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it spans everything from business insights to operational insights to security insights.

We have some customers in credit-card payment processing who use us to understand activations of credit cards. They’re extracting value from the data coming into Sumo Logic to understand and predict business impact and the revenue associated with the services they’re managing; in this case, a set of apps that run on a CDN.
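
That kind of business extraction can be as simple as counting structured events out of the same log stream the ops team already collects. A minimal sketch, with an entirely hypothetical log format:

```python
import re
from collections import Counter

ACTIVATION = re.compile(r"event=card_activation\s+region=([\w-]+)")  # hypothetical format

log_stream = [
    "2017-03-01T10:00:01Z event=card_activation region=us-east status=ok",
    "2017-03-01T10:00:03Z event=page_view path=/help",
    "2017-03-01T10:00:07Z event=card_activation region=eu-west status=ok",
    "2017-03-01T10:00:09Z event=card_activation region=us-east status=ok",
]

activations_by_region = Counter()
for line in log_stream:
    match = ACTIVATION.search(line)
    if match:
        activations_by_region[match.group(1)] += 1

print(activations_by_region)  # Counter({'us-east': 2, 'eu-west': 1})
```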

Try Sumo Logic for Free
To Get Critical Data and Insights
Into Apps and Infrastructure Operations

At the same time, the fraud and risk team is using us for threat prevention, and the operations team is using us to identify issues proactively and address any application or infrastructure problems. That’s what we refer to as full stack.

Full stack isn’t just the technology. It means providing business visibility and insights to line-of-business users looking at metrics around user experience and service quality; operational-level insights that help you become more proactive or, in some cases, react to wartime issues, as we’ve talked about; and, lastly, helping the security team take a different posture around reactive and proactive threat detection and risk.

In a nutshell, where we see these things starting to converge is what we refer to as full stack visibility around our strategy for continuous intelligence, and that is technology to business to users.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Sumo Logic.


OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

The next BriefingsDirect digital transformation case study explores how UK IT consultancy OCSL has set its sights on the holy grail of hybrid IT — helping its clients to find and attain the right mix of hybrid cloud.

We’ll now explore how each enterprise — and perhaps even units within each enterprise — determines the path to a proper mix of public and private cloud. Closer to home, they’re looking at the proper fit of converged infrastructure, hyper-converged infrastructure (HCI), and software-defined data center (SDDC) platforms.

Implementing such a services-attuned architecture may be the most viable means to dynamically apportion applications and data support among and between cloud and on-premises deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

To describe how to rationalize the right mix of hybrid cloud and hybrid IT services along with infrastructure choices on-premises, we are joined by Mark Skelton, Head of Consultancy at OCSL in London. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: People increasingly want to have some IT on premises, and they want public cloud — with some available continuum between them. But deciding the right mix is difficult and probably something that’s going to change over time. What drivers are you seeing now as organizations make this determination?

Accelerate Your Business
With Hybrid Cloud from HPE
Learn More

Skelton: It’s a blend of a lot of things. We’ve been working with enterprises for a long time on their hybrid and cloud messaging. Our clients have been struggling to understand not just what hybrid really means, but also how to make hybrid a reality and how to get started, because it really is a minefield. You look at what Microsoft is doing, what AWS is doing, and what HPE is doing with their technologies. There’s so much out there. How do they get started?

We struggled over the last 18 months to get customers onto that journey and get started. But now, because the technology is advancing, we’re seeing customers starting to embrace it and beginning to evolve and transform. And we’ve matured our models and frameworks as well, to help customer adoption.

Gardner: Do you see the rationale for hybrid IT shaking down to an economic equation? Is it about taking advantage of the technologies that are available? Is it about compliance and security? You’re probably tempted to say all of the above, but I’m looking for what’s driving top-of-mind decision-making now.

Start with the economics

Skelton: The initial decision-making process begins with the economics. Everyone has bought into the marketing messages from the public cloud providers saying, “We can reduce your costs and your overhead,” and not just from a culture perspective, but from management, personnel, and technology-solutions perspectives.

Skelton

CIOs, and even finance officers, see economics as the tipping point for moving into a hybrid cloud, or even going all-in on public cloud. But it’s not always cheap to put everything into a public cloud. When we look at business cases with clients, we look at the long-term investment, and over time it’s not always cheaper to put things into the public cloud. That’s where hybrid has come back to the front of people’s minds.

We can use public cloud for the right workloads, where clients want to be flexible, burst, and be a bit more agile, or even gain global reach for large global businesses, but then keep the crown jewels back inside secured data centers, where they’re known and trusted and closer to some of the key, critical systems.

So, it starts with the finance side of things, but it quickly evolves beyond that, and financial decisions aren’t the only reason people are going to public or hybrid cloud.

Gardner: In a more perfect world, we’d be able to move things back and forth with ease and simplicity, where we could take the A/B testing-type of approach to a public and private cloud decision. We’re not quite there yet, but do you see a day where that choice about public and private will be dynamic — and perhaps among multiple clouds or multi-cloud hybrid environment?

Skelton: Absolutely. I think multi-cloud is the Nirvana for every organization, simply because there isn’t a one-size-fits-all for every type of workload. We’ve been talking about it for quite a long time. The technology hasn’t really been there to underpin multi-cloud and truly make it easy to move from on-premises to public or vice versa, but I think now we’re getting there.

Are we there yet? No. There are still a few big releases coming, things we’re waiting to see released to market, that will help simplify multi-cloud and the ability to migrate back and forth, but we’re just not there yet, in my opinion.

Gardner: We might be tempted to break this out between applications and data. Application workloads might be a bit more flexible across a continuum of hybrid cloud, but other considerations are brought to the data. That can be security, regulation, control, compliance, data sovereignty, GDPR, and so forth. Are you seeing your customers looking at this divide between applications and data, and how they are able to rationalize one versus the other?

Skelton: Applications, as you just mentioned, are the simpler things to move into a cloud model, but the data is really the crown jewels of the business, and people are nervous about putting it into public cloud. So what we’re seeing a lot of is applications going into the public cloud for the agility, elasticity, and global reach, while data stays on-premises because people are nervous about breaches in the service providers’ data centers.

That’s what we’re seeing, but we’re also seeing a rise of things like object storage. We’re working with Scality, for example, and they have a unique solution for blending public and on-premises storage, so we can pin data to certain platforms in a secure data center and then, where the data is not as critical, move it into a public-cloud environment.

Gardner: It sounds like you’ve been quite busy. Please tell us about OCSL, an overview of your company and where you’re focusing most of your efforts in terms of hybrid computing.

Rebrand and refresh

Skelton: OCSL has been around for 26 years as a business. Recently, we’ve been through a rebrand and a refresh of what we focus on, and we’re moving more toward being a services organization, leading with our people and our consultants.

We’re focused on transforming customers and clients into the cloud environment, whether that’s applications, the data center, cloud, or hybrid cloud. We’re trying to get customers on that journey of transformation, engaging with business-level people on business requirements, and working out how to make cloud a reality, rather than just pointing at a product and saying go and do whatever you want with it. We find out what those businesses want and what the key requirements are, and then we find the right cloud models to fit.

Gardner: So many organizations are facing not just a retrofit or a rethinking around IT, but truly a digital transformation for the entire organization. There are many cases of sloughing off business lines, and other cases of acquiring. It’s an interesting time in terms of a mass reconfiguration of businesses and how they identify themselves.

Skelton: What’s changed for me is that when I go and speak to a customer, I’m no longer just speaking to the IT guys. I’m engaging with the finance officers, the marketing officers, and the digital officers; that’s the common one that is creeping up now. And it’s a very different conversation.

Accelerate Your Business
With Hybrid Cloud from HPE
Learn More

We’re looking at business outcomes now, rather than focusing on “I need this disk, this product.” It’s more, “I need to deliver this service back to the business.” That’s how we’re changing as a business: doing that business consultancy, engaging at that level, and then finding the right solutions to fit the requirements and truly transform the business.

Gardner: Of course, HPE has been going through transformations itself for the past several years, and that doesn’t seem to be slowing up much. Tell us about the alliance between OCSL and HPE. How do you come together as a whole greater than the sum of the parts?

Skelton: HPE is transforming and becoming a more agile organization, with some of the spinoffs that we’ve had recently aiding that agility. OCSL has worked in partnership with HPE for many years, and it’s all about going to market together and working together to engage with the customers at right level and find the right solutions. We’ve had great success with that over many years.

Gardner: Now, let’s go to the “show rather than tell” part of our discussion. Are there some examples that you can look to, clients that you work with, that have progressed through a transition to hybrid computing, hybrid cloud, and enjoyed certain benefits or found unintended consequences that we can learn from?

Skelton: We’ve had a lot of successes in the last 12 months in taking clients on the journey to hybrid cloud. One of the key ones that resonates with me is a legal firm we’ve been working with. They were in a bit of a state: an infrastructure that was aging and unstable and wasn’t delivering quality service back to the lawyers who were trying to embrace technology, mobile devices, dictation software, those kinds of things.

We came in with a fresh perspective on how we would address some of those problems. We challenged them and said we needed to go through a stabilization phase; public cloud was not going to be the immediate answer. They were being courted by the big vendors, as everyone is, about public cloud, and they were saying it was the Nirvana for them.

We challenged that, and we got them onto a stable platform first, built on HPE hardware. That gave them instant stability, so the business saw immediate returns and delivery of service. It’s all about getting something impactful back to the business, first and foremost.

Building cloud model

Now, we’re working through each of their service lines, looking at how we can break them up and transform them into a cloud model. That involves deconstructing the apps and thinking about how we can use pockets of public cloud alongside the hybrid, on-premises data-center infrastructure.

They’ve now started to see real innovative solutions taking that business forward, but they got instant stability.

Gardner: Were there any situations where organizations were high-minded and fanciful about what they would get from cloud, and that led to some disappointment, some unintended consequences? Others might benefit from that hindsight. What do you look out for, now that you have been doing this for a while, in terms of hybrid cloud adoption?

Skelton: One of the things I’ve seen a lot of with cloud is that people have bought into the messaging from the big public cloud vendors about how they can just turn on services and keep consuming and consuming. A lot of people have gotten themselves into a state where the bills keep rising and the economics look ridiculous. The finance officers are now coming back and saying they need to rein that back in, and asking how to put some control around it.

That’s where hybrid helps, because you can move some of those workloads back into your own data center. But the key for me is putting some thought into what you put into the cloud in the first place. Think through how you can transform and use the services properly. Don’t just turn everything on because it’s there and a click of a button away; put some design and planning into adopting cloud.

Gardner: It also sounds like the IT people might need to go out and have a pint with the procurement people and learn a few basics about good contract writing: terms and conditions, and clauses that allow you to back out if needed. Is that something we should be mindful of, IT being in procurement mode as well as technology-specifying mode?

Skelton: Procurement definitely needs to be involved in the initial cloud set-up, whenever a consumption commitment is being made, but once that’s done, it becomes IT’s responsibility in terms of how the cloud is consumed. Procurement needs to stay involved all the way through, keeping constant track of what’s going on, and that’s not happening.

The IT guys don’t really care about the cost; they care about the widgets, turning things on, and playing around with them. I don’t think they realize how much this is going to cost. So yes, there is a disjoint in lots of organizations: procurement handles the upfront piece and then goes away, and then IT comes in and spends all of the money.

Gardner: In the complex service delivery environment, that procurement function probably should be constant and vigilant.

Big change in procurement

Skelton: Procurement departments are going to change. We’re starting to see that in some of the bigger organizations, where they’re closer to the IT departments. They need to understand the technology and what’s being used, but that’s quite rare at the moment. Over the next 12 months, that’s going to be a big change in the larger organizations.

Gardner: Before we close, let’s look to the future. A year or two from now, if we sit down again, I imagine that more microservices will be involved and containerization will have an effect, where the complexity of services, and what we even think of as an application, could be quite different; perhaps more of an API-driven environment.

So the complexity of managing your cloud and hybrid cloud to find the right mix, pricing it, and staying vigilant about whether you’re getting your money’s worth seems like a place to start applying artificial intelligence (AI) and machine learning, what I like to call BotOps: something that is there for you automatically, without human intervention.

Does that sound on track to you, and do you think that we need to start looking to advanced automation and even AI-driven automation to manage this complex divide between organizations and cloud providers?

Skelton: You hit a lot of key points there in terms of where the future is going. I think we’re still in the phase of trying to build the right platforms to be ready for that future. We see the recent releases of HPE Synergy, for example, being able to support these modern platforms, and that’s allowing us to embrace things like microservices. Docker and Mesosphere are two types of platforms that will disrupt organizations and the way we do things, but you need to find the right platform first.

Hopefully, in 12 months, we’ll have those platforms in place, and we can then start to embrace some of this great new technology and really rethink our applications. And it’s a challenge to the ISVs: they’ve got to work out how they can take advantage of some of these technologies.

Accelerate Your Business
With Hybrid Cloud from HPE
Learn More

We’re seeing a lot of talk about serverless computing, where there’s no standing infrastructure and you spin up resources as and when you need them. The classic use case is Uber; they have built a whole business on that serverless-type model. I think that in 12 months’ time we’re going to see a lot more of that in enterprise-type organizations.
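
For readers unfamiliar with the model, a serverless function is just a handler the platform invokes on demand. In the AWS Lambda style it looks like the sketch below, where the event shape (a ride request) is a hypothetical example:

```python
# AWS Lambda-style handler: the platform provisions compute only while this runs,
# then tears it down, so there is no server to manage between invocations.
def handler(event, context):
    # 'event' shape is a hypothetical example (a ride request in an Uber-like service).
    rider = event.get("rider_id", "unknown")
    return {
        "statusCode": 200,
        "body": f"ride request received for rider {rider}",
    }
```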

I don’t think we have it quite clear in our minds how we’re going to embrace that, but it’s the ISV community that really needs to start driving it. Beyond that, it’s absolutely AI and bots. We’re all going to be talking to computers, and they’re going to respond with very human sorts of reactions. That’s the next wave.

We’re bringing that into enterprise organizations to solve some business challenges. Service-desk management is one of the use cases where we’re seeing, in some of our clients, whether bots can give immediate responses to common queries, so they don’t need as many support staff. It’s already starting to happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
