SiliconANGLE: Extracting the signal from the noise

VictorOps releases feature-rich Incident Automation Engine for DevOps teams
Fri, 24 Jun 2016 20:21:31 +0000

This week VictorOps Inc., a real-time incident management company producing DevOps solutions, announced the release of the Incident Automation Engine. The new product delivers a set of automation features for on-call teams designed to reduce mean time to resolution (MTTR) through intelligent alerts, sophisticated routing and smarter system outputs.

It has always been one of the bugbears of operations that when an incident happens, it takes time for that incident to be discovered (often via customers reporting trouble), more time for the on-call team to review the reports and logs, and still more time to effect a proper response, which may require the development team to build and deploy a patch. Even more problematic, part of that lifecycle may play out over a false alarm about an incident that never happened, wasting the time of a DevOps team that must triage every potential problem.

“More and more often, the best way to speed organizational innovation is by enabling developers to move quickly,” said Todd Vernon, VictorOps CEO and co-founder. “When it comes to a company’s uptime, this same concept applies. Today valuable time is lost with archaic alerting systems and manual processes. The [Incident Automation] Engine now becomes a foundation to automate out these inefficiencies, gain clearer access to actionable information and continuously evaluate and improve the inputs driving on-call processes.”

VictorOps expects the Incident Automation Engine to help cut down on noise incidents by automatically triaging any given incident trigger before it is delivered to the DevOps on-call team.

And when the Engine sees a problem that fits all the criteria of a proper incident, it delivers documentation about similar past solutions, documentation on the systems involved and all communication around the incident’s resolution. This means that when something goes wrong, everyone who gets involved has a full history of what has been done to resolve it so far and can quickly get an idea of who needs to know what.

Key features in the VictorOps Incident Automation Engine

With this release VictorOps described five key features of the Incident Automation Engine: alert automation, alert annotations, outbound webhooks, an API and a post-mortem report generator.

The platform includes alert automation, which routes specific alerts to the correct team members according to who is responsible for what; when it discovers unactionable (or false) alerts, it quiets them.
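The routing-and-quieting behavior described above can be pictured as a simple rules pass over incoming alerts. The following Python sketch is purely illustrative; the on-call table, alert fields and suppression rule are invented for this example and are not VictorOps’ actual engine.

```python
# Illustrative sketch of rule-based alert routing, not VictorOps' real logic.
ON_CALL = {"database": "dana", "frontend": "felix"}  # team -> current on-call engineer

def route_alert(alert):
    """Route an actionable alert to the responsible on-call engineer,
    or suppress it (return None) when it looks unactionable."""
    if alert.get("severity") == "info":  # treat info-level noise as unactionable
        return None
    team = alert.get("team", "frontend")
    return ON_CALL.get(team)

routed = route_alert({"team": "database", "severity": "critical"})
quieted = route_alert({"team": "database", "severity": "info"})
```

In a real system the rules would be customer-configured rather than hard-coded, but the shape of the decision is the same: match the alert against responsibility and actionability rules before anyone is paged.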

The system provides alert annotations, which automatically append specialized information to alerts, including runbooks, monitoring graphs, logs and other relevant context, providing remediation guidance and better visibility into the incident.

With outbound webhooks, the VictorOps platform’s data can be exported into other systems and VictorOps incidents integrated into other service dashboards, while an API is available to extend VictorOps to existing legacy tools and third-party reporting systems.
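An outbound webhook is, at bottom, an HTTP POST carrying incident data to another system. Here is a minimal Python sketch of what such an export could look like; the payload fields and endpoint URL are hypothetical, not VictorOps’ real schema.

```python
import json
import urllib.request

def build_incident_webhook(incident, url):
    """Build an HTTP POST request carrying an incident as a JSON payload.
    Field names here are illustrative, not the platform's actual schema."""
    body = json.dumps({
        "incident_id": incident["id"],
        "state": incident["state"],
        "timestamp": incident["timestamp"],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_incident_webhook(
    {"id": 42, "state": "resolved", "timestamp": "2016-06-24T20:21:31Z"},
    "https://dashboard.example.com/hooks/incidents",
)
```

The receiving dashboard would then parse the JSON body and render the incident alongside its own service data.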

Finally, there is a post-mortem report generator that automatically pulls a snapshot of all activities (alerts, conversations and remediation actions) associated with an incident. This gives a DevOps team something to review at its next meeting and a way to gain insight into what went well or what went wrong when responding to an incident.

Continued enhancements and investment in VictorOps from the industry

VictorOps continues to produce incident-related platform intelligence for DevOps teams, and in November last year it received $10.6 million from The Foundry Group and Costanoa Venture Capital to fund development of products such as the Incident Automation Engine.

Also this year VictorOps released enhancements to its incident management mobile app platform designed to ease team friction and burnout.

Featured image credit: Tristan via photopin (license)
Hackers steal and leak US military personnel data
Fri, 24 Jun 2016 19:22:13 +0000

When your job involves defending the nation, you want your computer networks to be just as secure. However, following a data breach of US military personnel information, it seems some hackers have made it through that security and leaked the data on the dark web.

A leaked database containing information on almost 2,500 US Army officials has begun circulating on the web with a message from the hackers. Those responsible are the hacktivists known as Ghost Squad, and according to BatBlue, they stole and leaked the information as part of the Operation Silence campaign.

With the data, the hackers included a message: “Fear is freedom! Subjugation is liberation! The contradiction is the truth! These are the truths of this world! Surrender to those truths, you pigs who fawn over clothing!” This may seem like an ominous, angry warning, but it is in fact taken nearly verbatim from the anime series “Kill La Kill.”

However, aside from the meaningless anime reference, it also included a message stating: “We are releasing Military officials of your government from credit card information, to names and phone numbers [and] emails. A total of over 5,000 United States military personnel will fall victim to this attack, we are not afraid of your empire, your new home will fall. Expect chaos.”

As the message claimed, the leaked database does include full names, phone numbers, email and home addresses, birth dates, and credit card information. Hackread notes that the information was first uploaded to an onion site in the dark net, and later posted throughout social networks and Pastebin.

There has not yet been a response from the US government, but given the information that was stolen, the officials affected should cancel their credit cards and enroll in identity protection services as soon as possible.

Photo by stuart.childs

Freshdesk enters core CRM market targeting SMB sales pros
Fri, 24 Jun 2016 18:08:47 +0000

Over the past five years, Freshdesk Inc. has built a base of more than 80,000 paid and freemium customers – and a market capitalization of more than $500 million – with a focus on customer relationship management (CRM) software for support organizations. Now it’s going after the sales professionals who make up the bulk of the market.

The company’s new Freshsales software-as-a-service (SaaS) application is initially targeted at small and midsize businesses. It combines customer and prospect management tracking with email/phone integration, user behavior tracking, automated lead scoring, a visual sales pipeline and reporting. Freshsales was built initially for the company’s own internal use after the unnamed commercial CRM system it had been using turned into “an expensive manual dumping ground for data,” said Girish Mathrubootham, founder and chief executive officer.

The initial release is targeted at growing companies that run e-commerce sites or other services in which rapid turnaround is required. “Today, the moment I launch my product or service I’m inviting the world to knock down my doors,” said Freshdesk President Dilawar Syed. “You need an intelligent system to help you prioritize.”

While the initial edition lacks the social media integration and some of the marketing automation functions of high-end enterprise CRM systems, it has a collection of features that guide the sales process. For example, built-in phone and email integration enables sales teams to automatically capture and track all communication with customers without switching between keyboard and handset.

User behavior tracking can be integrated with a customer’s existing website to track visitor actions and alert salespeople to opportunities. Alerts can be set to tip off sales pros if a visitor performs a specified set of actions, such as watching a demonstration video or downloading a free trial. “You’ll be able to see how engaged prospects are with your product right now,” Syed said. “The traditional approach was about pipeline and lead tracking. This is about engaging with customers.”

Behavioral tracking works via an embedded code snippet that the customer installs on its website or server, allowing Freshsales to track events. The product also provides real-time alerts on email opens, link clicks and new emails. Email campaign tracking evaluates overall performance, and behavior-based segmentation groups prospects according to aggregated data.
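Conceptually, the alerting side of that tracking reduces to checking a visitor’s events against a set of high-intent actions. This Python sketch is illustrative only; the event names and the single “high-intent” set are assumptions for the example, not Freshsales’ real API.

```python
# Illustrative sketch of behavior-based sales alerting; event names are invented.
HIGH_INTENT = {"watched_demo", "downloaded_trial"}  # actions worth tipping off sales

def should_alert(visitor_events):
    """Return True once a visitor performs any high-intent action."""
    return bool(HIGH_INTENT & set(visitor_events))

alert_a = should_alert(["viewed_pricing", "downloaded_trial"])
alert_b = should_alert(["viewed_homepage"])
```

A production system would evaluate customer-defined action sets per alert rule, but the matching logic follows this pattern.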

Integrated lead scoring presents sales teams with a prioritized list of leads, ranked according to customer-defined criteria. The product doesn’t provide full marketing automation functionality but is a “basic level of tracking” upon which the company intends to build, Syed said. Mobile apps for iOS and Android platforms are also available.
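Customer-defined lead scoring of this kind can be sketched as a weighted sum over observed events, with leads ranked by total score. The weights and event names below are invented for illustration; in the actual product the criteria are defined by the customer.

```python
# Hypothetical lead-scoring sketch; weights and event names are made up.
CRITERIA = {"opened_email": 10, "clicked_link": 20, "downloaded_trial": 50}

def score_lead(events):
    """Sum the customer-defined weight of each event a lead has performed."""
    return sum(CRITERIA.get(event, 0) for event in events)

def prioritize(leads):
    """Rank (name, events) pairs from hottest to coldest by score."""
    return sorted(leads, key=lambda lead: score_lead(lead[1]), reverse=True)

ranked = prioritize([
    ("acme", ["opened_email"]),
    ("globex", ["opened_email", "downloaded_trial"]),
])
```

The sales team would then work the top of the ranked list first, which is the prioritized view the article describes.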

Freshsales is the fourth product for Freshdesk, which was founded in Chennai, India, in 2010. The others are Freshdesk, for multi-modal customer support; Freshservice, for cloud-based service management; and an in-app support and engagement platform for mobile-first businesses. The company has raised $95 million in six funding rounds.

Freshsales is sold on a freemium subscription with a free version for installations of up to 10 users and more-functional versions priced at $12, $25 and $49 per user per month.

Facebook’s latest open-source tool will dramatically speed up AI projects
Fri, 24 Jun 2016 17:58:49 +0000

Since becoming actively involved with the artificial intelligence ecosystem in early 2015, Facebook Inc. has made numerous contributions ranging from niche software modules to entire server blueprints. The social networking giant expanded its repertoire yet again this week by open-sourcing a toolkit called Torchnet that provides building blocks for deep learning projects.

As the name implies, it’s designed for use with Torch, a popular AI development framework that has been adopted by several of Facebook’s engineering teams. Torchnet’s main selling point is a set of five programming abstractions meant to simplify common tasks involved in implementing deep learning functionality. One module provides logic for training models and testing their accuracy, while another helps assess the results. The remaining components are focused on more mundane activities like managing the large volumes of data that are required to carry out such experiments.
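Torchnet itself is Lua code for the Torch framework, but the separation it encourages between a reusable training loop (an “engine”) and pluggable metric trackers (“meters”) can be illustrated in a few lines of Python. This is a loose analogy for the reader, not Torchnet’s real API.

```python
# Loose Python analogy of an engine/meter split; this is NOT Torchnet's API.
class AverageMeter:
    """Accumulates a running average of a metric, such as training loss."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def add(self, value):
        self.total += value
        self.count += 1

    def value(self):
        return self.total / self.count if self.count else 0.0

def train(step, data, meter):
    """A generic training loop: iterate the data, let the model-specific
    step callback compute a loss, and record it in the meter."""
    for sample in data:
        meter.add(step(sample))
    return meter.value()

meter = AverageMeter()
avg_loss = train(lambda x: (x - 1.0) ** 2, [0.0, 2.0], meter)
```

The point of the abstraction is that `train` and `AverageMeter` never change between projects; only the step callback does, which is how such toolkits cut down on per-project boilerplate.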

Torchnet can dramatically reduce the amount of new code that has to be written for a deep learning project, which saves valuable time and resources. As a result, organizations developing software using Torch will be able to shorten delivery times while enabling their engineers to move on to other tasks more quickly, speeding up operations all around. The functionality should help make Facebook more competitive against the other tech giants working to expand their presence in the artificial intelligence world.

One of the company’s biggest rivals in this space is Alphabet Inc., which last year released an open-source Torch competitor called TensorFlow that is starting to make serious inroads. Meanwhile, Microsoft Corp. and Amazon Inc. are also offering free AI development tools in a bid to replicate the search giant’s success. The web-scale crowd can be expected to continue churning out new technologies as interest in deep learning increases among enterprises.

Image via Pixabay
Hadoop and beyond: A conversation with Hortonworks CEO Rob Bearden
Fri, 24 Jun 2016 16:28:21 +0000

As one of the first big data companies to go public, Hortonworks Inc. has been a natural target for both competitors and investors — never more than now.

The Santa Clara (Calif.) company sells subscriptions and services for its Hortonworks Data Platform, which is built upon the open source software Apache Hadoop for storing, processing and analyzing huge amounts of data.

Since its spinoff from Yahoo in 2011 and its initial public offering of stock in December 2014, Hortonworks has seen increased competition from larger companies such as Hadoop rival Cloudera Inc. and upstarts such as MapR and Databricks Inc. that are embracing Spark, a newer data processing engine, which Hortonworks also supports. Not least, Amazon Web Services and Oracle are getting into the big data game.

At the same time, amid uncertainties over how fast more companies are willing to start using Hadoop, investors aren’t as enthusiastic as they used to be. The five-year-old company’s shares plummeted after it announced plans in January for a $100 million secondary stock offering because investors questioned why it needed to raise so much money not long after its IPO, especially at a depressed share price. Shares are down 45 percent from the start of the year.

Still, Hortonworks continues to grow at a breakneck pace, with first-quarter revenues rising 85 percent, to $41.3 million. Subscription billings, which comprise 80 percent of sales, were up 122 percent. And Hortonworks reiterated a forecast that it would turn cash-flow positive in the fourth quarter.

Hortonworks Chief Executive Rob Bearden (pictured above) will talk up the company’s goal of becoming the leader in helping companies manage all their data in one place when he keynotes the Hadoop Summit his company and Yahoo are hosting June 28-30 in San Jose, Calif. (* Disclosure below.)

In an interview with SiliconANGLE, Bearden described Hortonworks’ increasingly expansive corporate strategy, how the company aims to keep up with new big data technologies and why being a public company provides an edge over competitors. This is an edited version of the interview. (You can also view another interview with Bearden by SiliconANGLE Media co-CEO John Furrier in the YouTube video linked below.)


Q: What’s the megatrend you’re betting on here, and how is Hortonworks trying to address the opportunity?

A: We are focused on being able to bring all data under management. That begins with the data from the point of origin, like a sensor or a clickstream or even a video, and bringing that under management, engaging with that data while it’s in motion, and processing it to make decisions. That can transform customers’ business models from being reactive to their customer post-transaction to being more interactive with their customers and their supply chain pre-transaction.

Q: To what extent are companies able to capture all that data, which they used to have to throw away because storage was relatively more expensive, in a useful way and make sense of it?

A: That’s the power of Hadoop. Even five years ago, it wasn’t pragmatic to bring that volume of data under management of traditional data platforms. As Hadoop emerged as an enterprise-viable data platform, you could now bring that data under management for a fraction of the cost of managing and processing it.

Now many new use cases emerge because of the power of Hadoop to be predictive about what our customers are doing. We can have a common view of all our relationships with customers.

Q: Are customers trying to automate existing processes with this technology, or are they finding fundamentally new things they can do as a result of having control over all this data?

A: Both. A simple use case is just mass storage and fast retrieval against a very large data set at probably a tenth of the price point of traditional technologies. Much higher-value use cases quickly emerged, like being able to have a 360-degree view of all of their data. With traditional customer relationship platforms, there’s one view of the customer in the dot-com or procurement platform. There’s another in the retail system. There’s another in the inventory system.

By leveraging Hadoop, you can bring all of those customer relationship views onto a central golden record about that customer, and be able to create a better customer experience, sell them more, faster and at a better margin.
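The “golden record” Bearden describes can be pictured as a fold of every system’s partial view of a customer into one merged record. This Python sketch is purely illustrative; the system names and fields are invented, and a real implementation would also need conflict resolution rules between systems.

```python
# Illustrative only: merging per-system customer views into one golden record.
ecommerce = {"cust_17": {"email": "a@example.com"}}
retail = {"cust_17": {"last_store_visit": "2016-06-01"}}

def golden_record(cust_id, *systems):
    """Fold every system's view of a customer into a single merged record."""
    record = {"id": cust_id}
    for system in systems:
        record.update(system.get(cust_id, {}))
    return record

merged = golden_record("cust_17", ecommerce, retail)
```

At Hadoop scale the same idea is applied across billions of records, with the central platform holding all the source views in one place.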

Q: That sounds like a typical retail situation. Any examples in other industries?

A: Take the oil and gas industry. They never had the ability to understand what was happening on the rig in real time and be able to compare that against the common standards for drilling volumes, patterns and chemical makeups they’re trying to accomplish with each of the crude varieties. Today, they can make a real-time decision, based on their libraries of goal sets, about what they want to do on that rig at the very instant they start pumping that crude and determine if they need to do maintenance later or in real time, to optimize the uptime and the pumping volumes. These companies can see from $100 million to $500 million a year in value with that real-time visibility on all that data all at once.

I could do the same with automotive, healthcare, financial services. Being able to bring all of the data under management from point of origination to point of rest transforms virtually every industry and allows them to evolve into new business models.

Q: How do you contend with customers’ organizational resistance to new business models?

A: This is one of those megatrends we’re betting on: Data becomes the new oil. They realize if they don’t embrace it, they die. Or if they don’t die, they certainly get left behind.

Q: Where do you see the biggest opportunity as a company in open source software?

A: At the core, it’s around continuing to innovate the tech, but also create value and enable these new models, and enable the enterprise to get business value back very quickly. That continues to expand the subscription relationships that we have. Our net expansion rate with customers was over 150 percent last quarter.

Q: How do you plan to move from big losses today to cash-flow positive by the fourth quarter, as you’ve promised?

A: We have beaten on every metric for the last six quarters, including moving cash burn down. We’re very comfortable in the execution against that. In 2015, we doubled our customer base.

Q: So it’s going to be a steady progression to profitability rather than, say, taking your foot off the marketing gas or other expenses? Is there a tipping point as you scale up?

A: We’ve had a very steady progression of growth in the last seven quarters. We’re going to continue to make investments going forward, and that will take us into EBITDA break-even. That’s what’s so great about the subscription model. You continue to create value and generate leverage.

Q: Given the fast-changing market and new competitors continuing to stream in, growth remains important to build a moat, right?

A: Without question. The great news about this space is that data is doubling every year across the enterprise, so our market opportunity continues to expand. At the end of last year, we expanded our strategy to bring the entire data stream from the point of origination of that data to real-time processing and engagement as events and conditions happen.

Q: A lot of people look to Red Hat as the iconic open source company to go public, and it has had its ups and downs, as has Hortonworks. Is Hortonworks trying to be the Red Hat for Hadoop?

A: There are many similarities between their model and ours–certainly open source, subscription-based revenue model. So sure, I’d gladly say we’re the Red Hat of Hadoop.

Q: Is the prospect of an IPO by Cloudera or other cloud software companies a challenge in terms of customer perceptions?

A: I can’t speak to where they are in their IPO objectives. But from our perspective, it’s been very good in customer situations to be a public company. When they start looking at creating and embracing the next-generation data platform, they want the transparency of a company that operates in a public market versus hearsay and rhetoric of a private company.

Beyond Hadoop

Q: To some, Hadoop feels like old news. Is it a challenge to convince new customers who might think Hortonworks is all about Hadoop at a time when other technologies such as Spark, Storm and Flink are driving the market perceptions out there?

A: We’re a huge supporter of Spark. We think it has an incredibly important and valuable place in the data architecture overall. Our architecture of Hadoop on the bottom brings all the data together on a central architecture, and above [allows customers] to simultaneously bring all of those different application types to execute over that central data architecture.

In the case of Spark, it does what it does extraordinarily well. But in certain environments it’s going to be some percentage of the data set and the workloads, and in other environments it’ll play less or more. We want to bring the data to Spark, not just let Spark emerge as another siloed data workload.

Q: How do you keep up with the fast pace of change in open source software that has produced adjacent or competing data technologies?

A: If we don’t have a meaningful and material role as a committer [to open source software projects], then we can’t innovate on the core architecture platform. With the core architecture that we’re enabling, we will participate in the projects and make them enterprise-viable and bring them into the platform, or we don’t have them as part of the platform.

Q: Many customers still view Hadoop as hard to use and expensive to implement. To what extent do you need to deal with that view?

A: There’s a tremendous evolution that’s happened. The first wave of it was becoming a truly enterprise-capable data platform, and after that came better enterprise services. The leg that’s forming now is ease of use, not only for the user to interoperate with the data, get data into it and get applications leveraging it.

That means being able to operate simultaneously in cloud, on-premise and hybrid environments and to have all the tooling that moves those workloads around transparently, with common security and governance models. We’ve been at that aggressively through our partnership with Microsoft the last three years.

Q: Will one distribution of Hadoop, or just a few, prevail going forward, or is fragmentation going to be a way of life for a while?

A: This is a massive market. Just look at the data growth that’s happening in the enterprise. It’s doubling every year, and 80 percent of that data growth is coming from data sets that were falling on the ground for lack of a viable platform. That opportunity opens up for multiple platforms to be successful.

Look back at ERP [enterprise resource planning]; there were certain providers that did well across certain industries or applications. Certain relational databases did certain kinds of things very well versus others. Given the size of this opportunity, that same dynamic will emerge.

Packaged big data applications

Q: Why aren’t there many packaged big data applications? Is that the way it’s going to be for the foreseeable future?

A: It is forming right now, actually. It’s a perfect indicator of the maturity of Hadoop, reaching a critical mass of adoption as part of the core data architecture strategy, that now the modern data applications can start emerging. There’s a big enough market to build great companies on. We saw that start to accelerate about this time last year.

Q: What examples would you point to?

A: Internet of Things applications are leveraging Hadoop. There are analytics platforms that are solely Hadoop-based. When you look at the cloud platforms that are now providing big data services, all of their traditional analytics natively support Hadoop. You see the connected car platform.

Q: So many big data technologies have been spun out into open source by tech companies such as Yahoo, LinkedIn and others that are not traditional software companies. What’s the upshot of that innovation model for either those companies, which are also users of these technologies, or for other customers?

A: It’s a significant trend. The new generation of companies tends to be either companies that have had to solve very hard problems to scale and they did it with their intellectual capital, or they are large companies that are hitting a scale problem, like Facebook, Twitter, Google, Yahoo, LinkedIn, even the federal government such as the National Security Agency. There’s another two dozen that are out there.

They realize the best place to innovate that tech is actually to put it in open source and get a community to form around it, with a core team that’s focused on guiding the roadmap, doing core innovation and taking it through to becoming an enterprise product. This is absolutely becoming the new model of software.

Q: That suggests a whole new structure for the software industry, doesn’t it?

A: Absolutely. It’s as transformative to the software industry as the cloud has been for the traditional hardware and storage industry.

* Disclosure: TheCUBE, owned by the same company as SiliconANGLE, will be the paid media partner at Hadoop Summit. This interview was conducted independently, and neither Hortonworks nor other summit sponsors have editorial influence on SiliconANGLE content.

Is open cloud architecture the future of Hybrid Cloud? | #CrowdChat
Fri, 24 Jun 2016 14:30:53 +0000

Is open cloud the next generation for hybrid IT architecture? IBM’s Open Cloud Architecture (OCA) Summit, which took place in Seattle on June 22, brought together a number of industry leaders to discuss “Learning to Love Open Hybrid Cloud” – and learning to avoid some of the risks involved in transitioning to open source.

A CrowdChat preview before the event offered an overview of current discussion points in the field, as well as advice for those just getting started with open hybrid cloud initiatives.

First steps

In line with the OCA Summit theme, the first question pondered was: “Where should companies start in their journey?”

The first step is clarity, said RackN, Inc. founder and CEO Rob Hirschfeld, and “understanding that ‘open’ means a lot of things. AWS has open APIs but not open code.”

Mark Thiele, chief strategy officer of Apcera, Inc., said that clarity of need and expectations is also important at the beginning of the open-hybrid-cloud journey. “We need to evaluate current need against expected need to avoid painting ourselves into a corner,” he said. “Reviewing clouds isn’t helpful as much as reviewing the framework/platform approach for using those clouds.”

Jason McGee, IBM fellow and VP and CTO of the IBM Cloud Platform, offered some practical advice: “Empower your developers and business leaders. They know your systems and needs better than anyone. Introduce them to tools like Bluemix [IBM’s hybrid cloud development platform] that enable them to experiment,” he said.

And the Cloud Native Computing Foundation’s newly appointed Chief Operating Officer Chris Aniszczyk echoed a number of earlier comments about the importance of involvement in the open-source community. “Start by participating in relevant open-source communities … and engage in open-source foundations that are serving as the bedrock for hybrid cloud technologies,” he said. After all, this collaborative exchange and cooperation is at the core of open source’s promise.


Portability and governance

As the field grows, so do questions of governance and the implementation of new technologies like containers. The most popular question asked during the CrowdChat was: “The portability that containers allow is helping to drive hybrid cloud adoption. But could more open governance increase the benefit? Why/how?” Responses were somewhat divided on this big-picture issue.

Jason McGee suggested that governance streamlines problem-solving. “We sometimes solve the same problem in many different ways,” he said. “If we could agree to solve orchestration, as an example, in one way, we could move on as an industry to the next issue.”

Stormy Peters, VP of Developer Relations at the Cloud Foundry Foundation, said that having more options makes innovation easier for developers, but added that solutions for customers were top priority. “We need to solve, in many ways to innovate, but customers need portability and interoperability to move quickly and solve big problems,” she said.

Others, like cloud consultant Antonio Carlos Pina, said that it really depends on the user, so a case-by-case approach would be helpful. “A startup will require speed and almost no process, while the enterprise will require process, governance and … tons of reports,” he said.

Several CrowdChatters agreed that containers alone are not enough to solve major problems. Jason McGee commented that companies “need the full lifecycle of tools around containers to make it real.” And Mark Thiele said that it takes more than moving a workload to achieve true portability across the cloud.

“Right now deploying containers in any real way is a snowflake activity. … Security, trust, audit, [are] all key to appropriate governance,” Thiele said. He concluded: “It’s also about time to value. We shouldn’t force one thing or the other on the community, but rather provide tools that give them real time to value w/ flexibility.”


Key open technologies

The OCA Summit team turned the conversation toward new technologies. “Moving forward, which open technologies will have the most significant impact on hybrid cloud adoption?”

Contributors highlighted a number of advances. Sriram Subramanian, founder and CEO of CloudDon, said that adoption would come from a “combination of many” – “OpenStack, Cloud Foundry, Kubernetes, [and] open container initiatives.”

Jason McGee said that he believes flexibility will be a defining characteristic of successful new tech. “The technologies that support flexibility and interoperability will surface as leaders and will see the most adoption. Containers is certainly going to be one of them,” he said.

The community behind new technology also matters, Stormy Peters theorized. “Cloud Foundry will have significant impact on hybrid-cloud, multi-cloud adoption, because it has a strong, diverse, cross-industry community creating solutions to solve the world’s hardest problems,” she said.

Duncan Johnston-Watt, founder and CEO of Cloudsoft Corp., agreed. “The open technologies that will have the most impact are those that are open to adjusting their own footprint and aligning themselves with similar initiatives,” he said. “Real-world example: @cloudfoundry, @cloudnativefdn dialog IMO.”

But some contributors still saw barriers to increased adoption. Rob Hirschfeld said the technology is “still maturing” and that “open Software-Defined Networking (SDN) is required for hybrid to advance.” He added that he thinks “business needs, infrastructure choice and competition drive hybrid more than the tech.”


Open cloud: The next generation of hybrid?

One of the most contentious discussions centered on the role of open tech in the future of cloud, a helpful question for those considering making the switch. “Recently, IBM Cloud SVP Robert LeBlanc called open technology ‘the foundation of the next generation of cloud.’ What evidence are you seeing to support this (or not)?”

Most agreed that it’s just too soon to call the cloud tech game. Mark Thiele said that “it looks good for open source, and I’m certainly a supporter. But we’re still in the second inning of the cloud game. There will be risks and failures that keep the need for proprietary and open.” And those proprietary advances, by definition, won’t be seen by the community at large until later in the game, as Sriram Subramanian pointed out. Open technology seems to be winning at the moment, but only “if you ignore AWS/Azure or if you ignore public cloud,” he said. But “while open tech is continuing cloud innovations, one cannot ignore the innovations major players are making behind walls.”

On the other hand, Ruben Orduz, technical advocate at Blue Box, an IBM Company, reminded the CrowdChat participants: “The vast majority of tools and tooling powering the cloud today are open source.” And Antonio Carlos Pina added, “Crowd development is surpassing proprietary models in new features, releases, etc. It’s the power of many. And it’s exponential, impossible to curb.”

Mark Thiele summed up the overriding sentiment by pointing back to the ultimate goals of cloud technology. “There is no guarantee that what we’re seeing will be the future,” he said. “What’s more important is that we continue toward the notion of providing customers [with] choice that can drive value.”


Is there such a thing as too much cloud?

If the ultimate goal is to drive value, then it’s vital that companies ensure hybrid cloud tech is appropriate to each use case it’s applied to. When is it not appropriate? Opinions varied.

Mark Thiele said he doesn’t think there is necessarily an “inappropriate” use of hybrid cloud. “It’s all about business case and appropriate solution selection,” he said. “Need to bust assumptions about what is real and what isn’t.” An “inappropriate” use would be “more associated with inadequate [business] case or design.”

Ruben Orduz emphasized the need to have “requirements drive the solution” and not put the cart before the horse. “Trying to prescribe a cloud architecture without looking at requirements is haphazard” and could theoretically lead to inappropriate applications, he said.

IBM’s VP of Cloud Architecture and Technology, Dr. Angel Diaz, added that “the application (workload) determines how, when and where to leverage hybrid cloud.”

Some newer applications, like cloud-native apps that use new data sources, might not need to involve hybrid tech at all, said Jason McGee. But even when the requirements and business cases are in place, Antonio Carlos Pina said that “there’s the customer personal taste” to consider. “After all, customer is king,” he said.


Photo by George Thomas


After the hype: Where containers make sense for IT organizations Fri, 24 Jun 2016 14:13:15 +0000

Container software and its related technologies are on fire, winning the hearts and minds of thousands of developers and catching the attention of hundreds of enterprises, as evidenced by the huge number of attendees at this week’s DockerCon 2016 event.

The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is the popularity of containers that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.

What do containers do?

Although interest has only spiked in the last couple of years with the emergence of companies like Docker Inc. and CoreOS Inc., containers have actually been around since the early 2000s. They were created to solve the problem of getting software to run reliably after being moved from one computing environment to another, because major problems can arise when the software development environment is not identical to the production environment.

“You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen,” said Solomon Hykes, founder and CTO of Docker, in an interview. “Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed. You’ll run your tests on Debian and production is on Red Hat and all sorts of weird things happen.”
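The version mismatch Hykes describes is easy to reproduce. The snippet below is an illustrative sketch (the `average` function is a hypothetical example, not from the interview): the same line of arithmetic silently changes meaning between the two interpreters.

```python
def average(total, count):
    # Under Python 2.7, "/" on two integers is floor division, so
    # average(5, 2) returns 2. Under Python 3, "/" is true division,
    # so the same call returns 2.5 -- the kind of "something weird"
    # that only appears when dev and production interpreters differ.
    return total / count

print(average(5, 2))  # 2.5 on Python 3; 2 on Python 2.7
```

Nothing in the code signals the discrepancy; only running it on both interpreters reveals the bug, which is exactly why pinning the runtime inside the container matters.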

Containers overcome this problem by taking the entire runtime environment (the app itself, plus all of its dependencies, libraries, binaries and configuration files) and bundling it into a single package that runs the same way wherever it is deployed. In this way containerization shares some similarities with virtualization, but there are also big differences. The most important is that virtual machines (VMs) include the entire operating system that the app runs on, as well as the application itself. With virtualization, a physical server running three VMs would therefore have a hypervisor (for management) plus three separate operating systems.
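That bundle is typically declared in an image build file. A minimal Dockerfile sketch might look like the following (the file names, base image tag and start command are hypothetical, for illustration only):

```dockerfile
# Pin the base image so the OS userland and Python version
# are identical in development and production.
FROM python:2.7-slim

# Bundle the app and its pinned dependencies into the image.
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app/

# The same command runs identically wherever the image is deployed.
WORKDIR /app
CMD ["python", "app.py"]
```

Building this file produces a single immutable image, so the “works on my machine” gap between Debian and Red Hat hosts that Hykes describes largely disappears.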

On the other hand, “containers are lightweight and do not include a full copy of the OS, only the application and its dependencies,” said Al Hilwa, program director of application development software at International Data Corp. (IDC). As such, three containerized apps can share a single operating system, which means significantly fewer resources consumed, Hilwa explained. VMs, each with its own operating system, can be several gigabytes in size, while a container is typically just a few megabytes. As a result, many more containers than VMs can be hosted on a single server, and containerized apps boot up much more rapidly (in seconds, as opposed to several minutes for VMs).

The building blocks of cloud native infrastructure


Wei Dang, head of product, CoreOS Inc.

Up until now, most container adoption has primarily been focused on packaging and isolating applications for easier software development and testing, explained Wei Dang, head of product at CoreOS (pictured, right). This is just the first step in a much larger transition to cloud-native architecture, in which applications are delivered as microservices in containers that run across distributed architecture.

“Cloud native infrastructure provides better security, scalability, and reliability, and it reduces operational complexity through automation,” Dang said.

Of course, containers are one of just several distributed systems components that make the wheels of cloud native infrastructure environments go round. Separate components are also needed to handle things like orchestration, networking and storage, which is why we’re hearing so much about technologies like Kubernetes, an open-source container orchestration tool built by Google that helps users to build and deploy both new and legacy applications in production.

Tools like Kubernetes play an important role in boosting container adoption, Dang explained, as they manage tasks like automating application scheduling and workload placement in clusters.

“They further simplify operations tasks,” Dang said. “Building a hybrid cloud with container infrastructure that spans on-premises and public cloud environments allows companies to quickly and easily run their applications based on business needs, not technical constraints.”
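As a sketch of what that declarative management looks like, a minimal Kubernetes Deployment manifest might read as follows (the app name, labels and image are hypothetical); the scheduler then handles placement and keeps three replicas running across the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # Kubernetes keeps three copies scheduled
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

If a node fails or a container crashes, the orchestrator reschedules replicas automatically, which is the kind of operations task Dang says these tools take off the team’s plate.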

Containers or VMs: Which is best for me?

So are containers a more effective replacement for VMs? In many ways they are, but IT has become so complex that any decision to adopt containers or stick with VMs will need to take numerous variables into account. Experts mostly agree that organizations should consider containers if they need to flexibly deploy, run, and manage applications at scale.

“Containers can increase the density of computing significantly,” said IDC’s Hilwa. That’s because container technologies were created as a less resource-intensive alternative to VMs by companies that needed to run hyperscale applications and rapidly iterate in development, he explained. Container-based infrastructure can yield significant cost reductions.

When trying to imagine the difference in how containers can scale, it can be helpful to paint a picture of how they work, stressed Holger Mueller, vice president and principal analyst at Constellation Research Inc.

“You can think of containers as building a staircase to unknown heights,” Mueller explained. In a VM environment, steps are one foot high in terms of resources used. With containers, they’re just an inch high. The container staircase is a lot easier to climb, and “you can scale to any height,” Mueller said.


Holger Mueller: Containers let app developers “scale to any height”

That’s not to say containers trump virtual machines in every case. Most experts agree that VMs are more secure by virtue of their maturity and the fact that hypervisors provide less functionality than the typical Linux kernel and consequently present a smaller attack surface.

In a VM environment, “processes do not talk to the host kernel directly,” wrote Red Hat Inc. security engineer Daniel Walsh in a blog post. “They do not have any access to kernel file systems like /sys and /sys/fs, /proc/*.”

And while containers are generally a superior choice for hosting apps designed to scale, not every application needs to do so. Virtualization may be the better bet for small applications or older legacy apps.

“Sometimes you need to run a different OS on the same server,” said Rob Enderle, principal analyst at the Enderle Group. “Containers share an OS, which means they’re not always suitable. In contrast, VMs emulate hardware, which makes it possible to run a different OS instance in each one. It’s an important advantage when you need to run multiple operating systems on a single machine, or perhaps an older OS for compatibility reasons with older apps.”

Better together

Then again, there are many proponents of containers who argue that the two technologies work better when used together. Docker is one of them. The company teamed up with virtualization giant VMware Inc. at VMworld 2014 to promote the idea of running containers inside VMs, with the main advantage being that the combination addresses the inherent security isolation problem of containers.

“There is a misconception that containers are merely replacements for VMs, when they actually solve different problems,” said CoreOS’s Dang. “The real question is not ‘when should I use containers?’ but ‘when does it make sense to use containers with or without VMs?’”

In answer to his own question, Dang said containers can probably provide sufficient security for an organization that deploys them in its own data center. However, when running in third-party multi-tenant environments or cloud services shared by many customers, it makes sense to run containers in VMs to provide that additional hardware-based security isolation.

“We recognized the need for these different use cases when we built rkt, a container run-time that optionally executes containers as virtual machines,” Dang said.

So virtualization is unlikely to be displaced any time soon, and not just because of the security implications. Despite the hype of the last couple of years, containers are still a relatively new technology, and while systems like Kubernetes and Docker Swarm make things much easier on the management side, those tools aren’t as comprehensive as virtualization management software like VMware’s vCenter or Microsoft’s System Center. Still, IDC’s Hilwa suggested that this may not always be the case, as technologies like Kubernetes are constantly evolving.

“Without strong orchestration or PaaS [platform-as-a-service], containers will not realize their full potential,” Hilwa warned. “This is why the industry is now focused on evolving a few options in orchestration. At some point a few will reach critical mass.”

Photo Credits: Hong Kong Photographic via Compfight cc
Using 3,000-year-old techniques to negotiate today’s tech relationships | #GuestOfTheWeek Fri, 24 Jun 2016 14:00:30 +0000

This week’s SiliconANGLE Guest of the Week segment should not be missed by anyone in the technology industry who is in an executive decision-making or sales role. This in-depth interview covers all the angles of negotiation and offers great advice for those in the tech industry.

Deepak Malhotra is a Harvard Business School professor and the author of a book called Negotiating the Impossible: How to Break Deadlocks and Resolve Ugly Conflicts (without Money or Muscle), in which he offers advice on conflict resolution and negotiations. He also sits on the Board of Advisors at Nutanix, Inc., and as you will find out, there is a very good reason he is in this position.

At the Nutanix .NEXT Conference 2016 at the Wynn in Las Vegas, Malhotra joined Dave Vellante (@dvellante) and Stu Miniman (@stu), cohosts on theCUBE, from the SiliconANGLE Media team, to discuss his role with Nutanix and offer valuable advice about negotiating deals in the technology industry.

Education not negotiation

Vellante began the interview by asking why a negotiations expert would need to attend the Nutanix .NEXT Conference and talk to customers. Malhotra explained that it’s not about sales, but about bringing perspective.

“There’s a lot of things going on there. On the one hand it shows a little bit about the Nutanix perspective, that it isn’t a zero-sum game. It’s not, ‘We’re going to train the Nutanix people so they can get an advantage over customers.’ I think the company is focused on creating as much value as possible for the end user. When you take that mindset, it actually makes sense to be inclusive and bring everybody in the ecosystem into the room … it’s [about sharing] your ideas with everybody. And I think that’s a really good sign when a company is willing to do that.

“The second thing is … a lot of the people who are customers who are thinking about moving in the direction of Nutanix or have bought into the idea, still say they need to sell it internally. They still negotiate internally. … So why not educate them about some of the things they might not have thought about yet.”

A history lesson in negotiation

Vellante asked about the opening of Malhotra’s book and the story about the 3,000-year-old Treaty of Kadesh. Malhotra explained why the oldest known treaty is still relevant today.

“There is a lesson embedded in the story of the Treaty of Kadesh that I think is as relevant today in negotiations of just about every kind in the business world and outside that’s worth telling. The basic story goes as follows: The Treaty of Kadesh … was between the Egyptians and the Hittites, and these two parties were at war. And at some point they must have decided, ‘Enough of this. We need to put an end to this … we need to find a way to resolve this conflict.’

“What often happens in these situations is nobody wants to look weak; nobody wants to be the one asking for peace because that may just embolden the other side … somehow they overcome these hesitations [and] they reach this agreement. Now what’s really interesting is that we actually have access to both language versions of the Treaty. So we have the Akkadian (Hittite) and the hieroglyphic (Egyptian) … but there is one difference when you compare the two peace treaties … in the Egyptian version, it says that it was the Hittites who came asking for peace, and in the Hittite version, it says the Egyptians came asking for peace.

“What it goes to show is that no matter how far back you go … this need for every side to declare victory at the end of the negotiation, at the end of a conflict, is as old as human beings themselves. When you understand that, I think it changes the way in which you try to negotiate these deals. Sometimes it’s not the substance of the deal … but there might be other reasons they can’t say yes. For example, they might lose face, or they may look bad. And when you recognize that, I think you come at it a different way.”

Sitting at the table

Miniman wanted to know how to change people in the tech industry who are notoriously unwilling to change. Malhotra responded by talking about what to do when you have a seat at the table.

“You have to get the economics right, and you have to get the psychology right. The economics … you have to have a good product, it needs to be priced appropriately and you need to be bringing value to the table. … The problem is you may have the right product … but there might be these psychological hurdles that you need to get over. A prominent one [hurdle] …when no one else is doing it and nobody feels the urgency to do it … and you don’t have a long list of customers that you can use to prove to people that this is the way you should be going. There is always a risk that somebody is going to take a bet on this and something goes wrong; you know, it’s sort of nobody lost their job buying IBM kind of mentality.

“So as a negotiator in a company starting out in the early stages, especially in technology where you’re doing something a little disruptive, you need to start thinking about how do we get them over that, how do we get them to start understanding.

“The most common thing that happens when you walk into the room with a new disruptive technology is that the person on the other side says, ‘Are you crazy? You’re charging 10 times what your competitor is charging. You’re sitting here asking me to pay X? If I do nothing I have to pay zero.’ That is a very common response salespeople get when they are in an environment like this, and one of the things I advise people to do in that situation … is to not make the worst mistake a salesperson can make. … The worst mistake is to apologize for the price being too high. … The moment you go in that direction, you’re giving a license to haggle with you, because what you’re telling them is even you don’t think the price is appropriate.

“A better response in a situation like this is for the salesperson to say, ‘Listen, I think the question you are asking me is, how is it that despite our price being 10x of what some other people are charging … what kind of value must we be bringing to the table. … Now I’m happy to talk about that value because, at the end of the day, we all know that nobody’s going to pay more for something than it’s worth.’ What you are doing there is shifting the conversation from price to value.”

It’s not about what you get, but how you feel

Vellante posed the question of whether it is more important to get the best deal or to find common ground. Malhotra rephrased the question and offered up the best result.

“In my experience, it is possible to get a great deal and a great relationship; it’s also possible to get neither. So what you are trying to do is optimize on both. Very often we assume that it’s a zero-sum game enough that the only way for me to get a good deal is for me to sacrifice a relationship in some way. That’s not how it works in a richer context … in more complicated deal scenarios, because what people evaluate when they walk away from the table isn’t just ‘Did I get a good economic deal?’ When people think back they say, ‘Do I want to work with this person again? Do I like this person? Did I get a good deal?’

“Often what they are thinking about is not so much of the substance that they got, what’s in the agreement, but the process they went through. … If you navigate the process more effectively, you can often get to a point where you can get the deal that you think is right for you and you get a relationship that both sides can walk away feeling good about.”

Watch the full interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of Nutanix .NEXT 2016.

Photo by SiliconANGLE
Public cloud giants get security nod from FedRAMP Fri, 24 Jun 2016 12:30:41 +0000

Amazon Web Services, Microsoft Azure and the lesser-known CSRA Inc. have landed a key authorization from the U.S. government that gives federal agencies permission to use their cloud-computing services to store highly sensitive data.

The three companies’ clouds (AWS GovCloud, Azure GovCloud and CSRA’s ARC-P IaaS) have all been granted provisional authority that allows them to provide services under the highest baseline of the government’s tough Federal Risk and Authorization Management Program (FedRAMP) standards for cloud computing services.

More than 400 security controls are present in FedRAMP’s high baseline. Now that AWS, Microsoft and CSRA have all been granted approval, their clouds can be used for the most sensitive of workloads, including storing citizens’ personal information.

The award is notable for all three companies because the U.S. government spends around half of its $80 billion annual IT budget on systems covered by that high baseline, FedRAMP said in a blog post. “That’s huge!” the organization helpfully pointed out.

“These security requirements will be used to protect some of the government’s most sensitive, unclassified data in cloud computing environments,” FedRAMP added. “This release allows agencies to use cloud environments for high-impact data, including data that involves the protection of life and financial ruin.”

With AWS’s GovCloud now approved for FedRAMP’s high baseline, government agencies have a much “simplified path” to transitioning their most sensitive data to the public cloud, noted Teresa Carlson, vice president for the worldwide public sector at AWS, in a statement.

FedRAMP said its high baseline standards are aligned with those of the U.S. National Institute of Standards and Technology, which classifies data as “high risk” if a security breach would have a severe impact on an organization’s assets, operations or people.

Image credit: tpsdave via pixabay
Data Center survey shows enterprises waste little time in shifting to the cloud Fri, 24 Jun 2016 12:05:38 +0000

The cloud is absolutely not a fad. Instead, enterprises are falling over themselves to have public cloud providers reduce the burden of managing their IT, according to the results of a new study published this week by the Uptime Institute.

Just over 50 percent of the IT professionals and data center operators polled in the newly published Uptime Institute 2016 Data Center Industry Survey said they expect most of their IT workloads to shift to the cloud or to colocation facilities. Seventy percent of those respondents expect that to happen by 2020, and 23 percent as early as next year.

“The shift is occurring, and our findings show an industry in a state of flux. We saw the trends lining up beginning with our 2013 survey, noting that enterprise IT teams were not effectively communicating data center cost and performance metrics to their C-level executives,” remarked Matt Stansberry, director of Content and Publications for the Uptime Institute, in a statement.

Back in 2013, only 42 percent of enterprise data center operators indicated they reported cost and performance information to the C-suite, compared to more than 70 percent for third-party providers. According to Stansberry, this is because many enterprise IT teams tend to emulate counterparts at cloud providers.

“The business demand for agility and cost transparency has driven workloads to the public cloud. Our counsel to data center and IT professionals is to become more effective at articulating and showcasing their value to the business,” Stansberry added.

The survey showed that finances are one of the biggest driving forces in pushing enterprises to the cloud. Some 50 percent of respondents reported that their IT budgets over the past five years have either been tightened or remained flat. And just 10 percent said their IT budgets were “significantly higher” than they were five years ago, with the rest reporting only modest increases.

Another benefit from going to the cloud seems to be a smaller server footprint. Some 55 percent of respondents said their server footprints have remained flat or shrunk in the last five years.

The automation of routine operations is often justified as a way to free up DevOps teams to get distributed enterprise applications out the door faster. Emerging production technologies like application containers are helping to speed that process while presenting new challenges to IT operators who must closely monitor a new generation of hyper-scale IT infrastructure.

The study also found that more than half of enterprises moving workloads to colocation facilities were either “satisfied” or “very satisfied” with their chosen provider. Even so, the Uptime Institute concluded from its survey that IT outsourcing isn’t always the panacea it’s made out to be. Some 40 percent of those polled said they were paying more than expected for colocation services, and around a third said they had experienced outages at their colocation site. Unfortunately, 60 percent of those that did experience outages said that the penalty clauses in their service level agreements were unable to offset the losses incurred by their businesses.

“Enterprise organizations paying a premium for a third party to deliver datacenter capacity should hold service providers to higher standards,” the survey recommended. “There is room for improvement in vetting, negotiating and managing those relationships.”

Photo Credit: rodocody_quispezela via Compfight cc