Before Elon Musk made history by launching his Tesla Roadster into space earlier this month, the entrepreneur's electric-car company was doing some out-of-this-world engineering right here on Earth.
The Tesla Model S, for example, doesn't require a key -- and because it's electric, there's no need to push a power button, either. "You get in it and just start driving," said Gartner analyst Mindy Cancila. She and fellow Gartner analyst Danny Brian spoke during the kickoff of the market research firm's Catalyst conference for IT architects in San Diego in August.
And then there's the space-age dashboard: There are no needles measuring cruise speed or battery-power level (again, it's electric). It's all software, which Tesla updates "over the wire."
"This vehicle collects data on everything you can imagine -- from the vehicle performance to the road conditions to traffic trends and conditions -- and then they use all that data to improve the vehicle constantly," Brian said.
For Cancila and Brian, the $68,000 luxury auto exemplified a principle IT teams need to build into their systems in an age when the activity of nearly all things -- whether built or born -- can be measured, tracked and analyzed: precision.
"Historically, we planned for peak capacity," Cancila said. "We were really imprecise. We overbought because we didn't know what our budgets might look like in the future. And we didn't know if we would be able to meet the capacity needs."
But making IT systems more precise -- by collecting, processing and making sense of data about their performance -- can lead to levels of efficiency and cost savings that were never before possible, Cancila and Brian stressed. In late summer 2017, that was sound advice. In 2018, with artificial intelligence, machine learning and automation advancing fast and poised to reshape business and the wider world, building a precise, more efficient IT is more than advice. It's a mandate.
Leaner, meaner cloud
Alec Chattaway is following it. He's the director of cloud infrastructure operations at data management software provider Informatica. The Redwood City, Calif., company, founded in 1993, has long helped companies integrate and analyze data in on-premises data centers. Today, the company specializes in cloud data integration and is transitioning its own IT to the cloud, with much of its infrastructure on Amazon Web Services. It's also increasing its presence on Microsoft Azure and is mulling Google Cloud Platform for future workloads. Chattaway is in charge of all the company's cloud infrastructure and operations.
As vaunted as cloud is for its overhead-cutting powers, "We do spend a lot of money," Chattaway said, and cost is a real concern. To that end, he's resolved to increase the efficiency of the company's cloud infrastructure by "probably 50%."
"We're past the engineering stage to the point where we can start looking at efficiency," he said.
Chattaway is creating a more efficient IT organization by using predictive analytics to slice and dice historical and real-time data about computing resources such as processing, memory and storage. The analysis draws a predictable timeline of resource usage, allowing the cloud infrastructure team at Informatica to know "with some certainty when your load will come in" -- in other words, when they're going to use how much computing power. That way, they can precisely provision resources as they're needed and keep cloud costs to a minimum.
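In its simplest form, the analysis Chattaway describes can be sketched as a forecast built from historical usage samples, provisioned with a modest safety margin. The numbers, window size and 20% headroom factor below are illustrative assumptions, not Informatica's actual model:

```python
from statistics import mean

def forecast_demand(usage_history, window=7):
    """Naive forecast: average the last `window` observations of
    resource usage (e.g. CPU cores consumed at this hour on
    previous days)."""
    return mean(usage_history[-window:])

def provision(forecast, headroom=1.2):
    """Provision the forecast plus a modest safety margin,
    rather than a blanket peak-capacity overbuy."""
    return forecast * headroom

# Illustrative history: cores used at this hour over the past week.
history = [40, 42, 38, 41, 44, 39, 43]
needed = provision(forecast_demand(history))
print(round(needed, 1))  # prints 49.2 -- forecast of 41 cores plus 20%
```

Real deployments would use far richer models -- seasonality, trend, confidence intervals -- but the principle is the same: provision to a prediction with known headroom, not to a worst-case guess.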
A curb on overbuying
Predictive analytics will also enable Chattaway to all but erase the need to overprovision -- that is, allocate more computing resources than are needed. It's a common cloud practice: Companies buy extra computing power from a cloud provider and overprovision virtual machines when they don't know how much a website, say, or application will need. To admins, it's safer than erring on the low side: More visitors to a site than anticipated can cause it to crash, irritate customers and ultimately damage business.
But overprovisioning is woefully inefficient. Speaking on the long drive from California to Las Vegas, Chattaway gave the analogy of trying to pass an 18-wheeler in his Toyota Prius.
"It would be really nice to have a V-8 nitrous-injected engine to be able to get past this truck in 150 yards," he said. But buying racecar machinery for an environment-friendly family car "just in case I need to overtake a truck," most would agree, is a lavish waste of money. So is paying a cloud provider for computing power that won't get used.
But unforeseen, on-demand events happen, though they rarely have the poetic machismo of a Prius taking on a semi. At Informatica, it's typically "an onboarding of customers," Chattaway said -- more customers starting out with the company's integration tools than expected. He might predict, for example, that 50 new customers will sign on, based on the time of year and other factors, and provision for 50. Then 250 sign on.
"Rather than be predictive, it's more -- I wasn't expecting it, therefore, I now need to literally spin up stuff to just meet demand," he said.
To prepare for a sudden onslaught, Chattaway needs to scale out -- that is, add more machines for virtually unlimited processing power. In a physical data center, that would mean finding spare server racks or making space for new servers; the operation might take weeks.
"I would like to make that 15 minutes," Chattaway said, while adding "two or three zeros to the current resource."
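The reactive fallback Chattaway describes -- spinning up capacity when actual demand blows past the forecast -- amounts to a threshold rule. This sketch shows the shape of such a rule; the trigger ratio, growth cap and instance counts are assumptions for illustration only:

```python
def scale_out_decision(forecast, actual, current_instances,
                       trigger=1.5, max_multiplier=10):
    """Reactive scale-out: if real demand exceeds the forecast by
    more than `trigger`, grow the fleet in proportion to the
    overshoot, capped at `max_multiplier` times the current size."""
    if actual <= forecast * trigger:
        return current_instances  # the prediction held; no action
    # Grow in proportion to how far demand overshot the forecast.
    needed = int(current_instances * min(actual / forecast, max_multiplier))
    return max(needed, current_instances + 1)

# Forecast: 50 new customers onboarding. Actual: 250.
print(scale_out_decision(forecast=50, actual=250, current_instances=4))  # prints 20
```

In practice this logic lives inside a cloud provider's autoscaler rather than hand-rolled code, but the decision it automates -- compare demand against prediction, then grow fast and shrink back -- is the same.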
Getting resourceful in the cloud
He's getting there. In a cloud computing environment, the cloud providers do the scaling, and they're quick about it. To scale out fast to meet unexpected bursts in demand -- and, more important, back again, so Informatica doesn't pay for juice it doesn't need -- Chattaway is looking to replace many of his virtual machines with containers, the virtualization technology that breaks applications down into discrete, lightweight parts. Containers allow granular control over resources, letting his team dole out specific amounts of computing power, memory and storage and scale out in seconds, versus several minutes for virtual machines.
Also part of shaping a more efficient IT are efforts to "trim the fat," Chattaway said -- turning off machines that don't need to be running on the weekend, for example; billing internal consumers of cloud infrastructure, so department heads can see what engineers are spending; and evaluating whether to use Informatica's own machine learning products to improve cloud infrastructure performance.
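The first of those fat-trimming steps -- finding machines that nobody needs running on the weekend -- can be sketched as a filter over utilization data. The 5% threshold, field names and fleet records here are hypothetical, chosen only to show the idea:

```python
IDLE_CPU_THRESHOLD = 5.0  # percent; an illustrative cutoff

def weekend_shutdown_candidates(instances):
    """Flag instances whose weekend CPU usage suggests nobody
    needs them running on Saturday and Sunday."""
    return [i["name"] for i in instances
            if i["weekend_avg_cpu_pct"] < IDLE_CPU_THRESHOLD
            and not i["always_on"]]

# Hypothetical fleet records with average weekend CPU utilization.
fleet = [
    {"name": "build-agent-1", "weekend_avg_cpu_pct": 1.2,  "always_on": False},
    {"name": "prod-api-1",    "weekend_avg_cpu_pct": 34.0, "always_on": True},
    {"name": "staging-db",    "weekend_avg_cpu_pct": 0.4,  "always_on": False},
]
print(weekend_shutdown_candidates(fleet))  # prints ['build-agent-1', 'staging-db']
```

Pair a report like this with internal chargeback -- the billing of department heads Chattaway mentions -- and the idle machines tend to get switched off without much argument.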
"But again, we have to weigh that up against resource," Chattaway said. "If that's going to cost 15 engineers, and we can buy something off the shelf -- the cost of three engineers -- then why wouldn't we?"
Back in San Diego in 2017, Gartner's Brian rattled off questions attendees should ask themselves while reviewing their IT architectures: Are you discovering new and surprising uses for the data you're collecting? Are you allowing that data to change the way you operate? Are you solving the right problems?
As 2018 progresses -- and Musk's Tesla cruises toward Mars -- anticipate the future, act in the present, enable a more efficient IT and answer with a resounding yes.