Ever since the first hunks of Big Iron hit the streets, we’ve been struggling (and failing) to keep up with the demand for software applications.
Now it’s getting harder. Not only must we satisfy an insatiable appetite for new apps and services, but the nature of the business palate is changing. Business now demands a continuous flow of software innovation over back-office application support – or, to use the haute cuisine analogy, it wants IT to serve a classy degustation menu rather than just warming up a ready-made TV dinner.
All this bodes well for the fast, iterative style of agile, with fully autonomous development teams empowered to increase the rate of software deployments.
Sounds great, but what does this mean for IT Ops?
Many would argue that IT Ops is a completely defunct or redundant function. Even credible analysts and thought leaders have touted NoOps (No IT Operations) as a viable option. Some even suggest that it accelerates DevOps collaboration, since removing the Ops function removes all friction.
There appears to be some validity to these arguments. After all, if developers can codify operational functions into their activities, why do we need the discipline? Perhaps this logic explains why, for some, IT Operations as a function is becoming disenfranchised; no longer having a voice in critical digital transformation initiatives – or if it does, just the sound of “No” – the last deployment bottleneck.
The death of IT Ops has been greatly exaggerated
All this NoOps thinking is flawed for a couple of reasons.
Firstly, however operationally awesome developers think they are, building in resilience, maintainability and supportability is not always top-of-mind. Worse still, these elements might be neglected if management is fixated on rewarding developers according to ‘speeds and feeds’. Secondly, even when these elements are addressed, it often happens at the end of development cycles, or they’re bolted on after problems are discovered – that’s like baking a cake, forgetting the sugar, and then trying to compensate with a sickly sweet chocolate sauce.
In reality, great operations engineers are best equipped to help incorporate operational excellence into all practices. After all they have years of experience supporting every new wave of technology – from Mainframes to Microservices. What must change, however, is how their expertise is developed and shared.
- Become cloud connoisseurs – Like our development colleagues, operations’ emphasis will shift from on-premises to cloud. This means ensuring Ops expertise is fully leveraged to support the business nirvana of delivering high-quality digital customer engagement at scale. So if you’re an IT operations professional still configuring QoS policies on routers or manually provisioning development environments, it’s probably time to skill up in PaaS, Amazon EC2 and containers. Either that or go and find a gig with a cloud service provider.
- Embed Ops in Dev – This means being less separatist and siloed and more inclusive and unified. Rather than focus on technical diagnostics and reacting to failures, the new operations professional will apply systems thinking to look holistically at how business applications and the all-important customer experience can be improved. This is easier said than done with skeptical development teams, so it’s essential that monitoring feedback is automatically shared and incorporated into all practices – for example, agile sprints.
- Embrace Lean thinking – This involves moving from being interrupt-driven – fixing one technical problem after another – to working side-by-side with other teams to ensure a constant flow of value is delivered to the business with all waste removed. As with our development colleagues, elements of agile (and common-sense) thinking will come to the fore – with terms like “never done” and continuous improvement becoming established elements of the new operations and performance management mantra.
Perhaps the biggest change IT Operations will undergo is how it interfaces with other teams. Here, IT Ops will morph into providing sets of easily accessible and repeatable processes other teams will use to bake quality into everything developed and tested. To this end, operations will transform from a separate “keeping the technical lights on” function to a high-value craft that outfits the software factory and its staff with the new capabilities they need to ensure speed and quality at scale.
The new age of Agile Operations “craftsmen”
For IT Operations this means letting go of many traditional admin tasks – not just running things, but participating in crafting new automated processes. So rather than routinely monitoring applications in production, the Ops in DevOps will ensure this capability is available in pre-production. Rather than mundanely provisioning servers from change requests, new teams will equip the organization with complete release environments that support program-level goals.
Operating as DevOps “craftsmen” makes complete sense because it moves expertise out from behind the production curtain. The organizational focus of IT operations positively changes too — from being technically good at describing and fixing problems, to being awesome at prescribing improvements that drive better business outcomes.
Great organizations exploit their business models with flawless operational execution. With these models now being constantly re-shaped by software applications, IT Operations is more important than ever. Relegating it with NoOps thinking isn’t an option.
VMware co-founder Diane Greene will oversee all of Google’s cloud businesses, including its Cloud Platform and Apps productivity suite, the company announced Thursday.
Greene, who has been on the company’s board of directors for three years, took the position as the technology giant agreed to acquire Bebop, a stealthy startup that she co-founded. In a blog post announcing the news, Google CEO Sundar Pichai called the company’s product “a new development platform that makes it easy to build and maintain enterprise applications.”
It’s not clear what exactly that means, but Pichai went on to say that he expects the deal to let more businesses reap the benefits of cloud computing. In addition to Greene, the rest of the team from Bebop is also slated to join Google as part of the acquisition.
Greene served as VMware’s CEO from 1998 until she was fired by the company’s board of directors in 2008. She also serves on Intuit’s board of directors, which she joined in 2006.
Thursday’s news comes as Google is trying to cement its credibility as an enterprise service provider. The company’s cloud platform remains less popular among companies than Amazon Web Services and Microsoft Azure. Google argues that its expertise as an Internet company makes it a logical choice for enterprises looking to buy into the public cloud, but its largest competitors are also big Internet players and have more popular cloud platforms.
Google is also trying to woo larger businesses away from their software contracts with Microsoft by offering to give organizations access to the Google Apps productivity suite for free for the length of their enterprise agreement with the company’s competitor to the north.
On Wednesday, Google’s infrastructure chief, Urs Hölzle, said he expects the company’s cloud platform revenue to eclipse its ad revenue within five years. Greene’s leadership could help the company fulfill those ambitions.
I’m very pleased to announce the release of a custom EMR bootstrap action to deploy Apache Drill on a MapR cluster. MapR is the only commercial Hadoop distribution available for Amazon’s Elastic MapReduce service (EMR), and this addition allows EMR users to easily deploy and evaluate the powerful Drill query engine.
The bootstrap action is available at: s3://maprtech-emr/scripts/mapr_drill_bootstrap.sh. It can be invoked as part of a GUI-launched MapR-EMR cluster by simply adding a “Custom action” to your selection of any MapR cluster (as illustrated in the excerpt of the larger EMR launch panel GUI below):
Use the “Configure and add” button to specify the correct location of the script (s3://maprtech-emr/scripts/mapr_drill_bootstrap.sh). No arguments are necessary… the script always installs the latest version of the mapr-drill package.
Users who prefer to launch clusters using Amazon’s aws command-line tool can add the action via the --bootstrap-actions argument to the “aws emr create-cluster” command (see the documentation at http://docs.aws.amazon.com/cli/latest/reference/emr/create-cluster.html).
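As a rough sketch of such an invocation (the cluster name, AMI version, edition, instance types, counts, and key name below are illustrative placeholders, not values from this post – check the linked documentation for the options your cluster needs):

```shell
# Hypothetical example: launch a MapR cluster on EMR with the Drill
# bootstrap action attached. Everything except the bootstrap-action path
# is a placeholder to be adapted to your environment.
aws emr create-cluster \
  --name "MapR-Drill-Cluster" \
  --ami-version 3.3.2 \
  --applications Name=MapR,Args=--edition,m3 \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes KeyName=my-key-pair \
  --bootstrap-actions Path=s3://maprtech-emr/scripts/mapr_drill_bootstrap.sh
```

The only part specific to this post is the final `--bootstrap-actions` line; the script takes no arguments, as noted above.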
Upon successful completion of the cluster launch, the Drill software will be installed on all nodes. Users can access the Drill query engine in one of two ways :
- The sqlline tool, executed from any node in the EMR cluster
- The Drill control console at http://<cluster_master_node>:8047
NOTE: The default EC2 security group for the EMR cluster (usually named “ElasticMapReduce-master”) will NOT allow traffic on port 8047. Users will want to explicitly edit the security group associated with the Master node and enable inbound traffic for that port in order to access the Drill control console.
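Assuming the default security group name mentioned above, one way to open the port from the CLI might look like the following (the CIDR range is a placeholder – restrict it to your own address range rather than opening the port to the world):

```shell
# Hypothetical sketch: allow inbound TCP traffic on port 8047 (the Drill
# control console) to the EMR master node's security group.
# Replace 203.0.113.0/24 with your own address range.
aws ec2 authorize-security-group-ingress \
  --group-name ElasticMapReduce-master \
  --protocol tcp \
  --port 8047 \
  --cidr 203.0.113.0/24
```

The same change can of course be made interactively from the EC2 console’s Security Groups page.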
To get started quickly, simply ssh into the master node of the cluster and execute the command
sqlline -u jdbc:drill:
This will invoke the sqlline command tool and enable Drill queries against any data in the cluster file system. The cluster is also configured to access some pre-staged data in an Amazon S3 bucket (s3://mapr-public-files/). Sample queries against that data have been saved as /home/hadoop/dquery1.sql and /home/hadoop/dviews.sql on the master node. Running those queries is as simple as
sqlline> !run dquery1.sql
For more information on Apache Drill on MapR, please see the overview and discussion at https://www.mapr.com/drill. There are some interesting examples and an in-depth discussion about configuring storage plug-ins to access your data.
Details on Amazon’s Elastic MapReduce service and how to plan your cluster (including a discussion of the MapR differentiators) can be found at: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan.html
In this approach, an enterprise has a single computer to store and process big data. Data is stored in an RDBMS such as Oracle Database, MS SQL Server or DB2, and sophisticated software can be written to interact with the database, process the required data and present it to users for analysis purposes.
This approach works well where the volume of data is small enough to be accommodated by a standard database server, or up to the limit of the processor handling the data. But when it comes to huge amounts of data, processing it through a traditional database server becomes a tedious task.
Google solved this problem using an algorithm called MapReduce. This algorithm divides the task into small parts and assigns those parts to many computers connected over the network, and collects the results to form the final result dataset.
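The split-process-recombine idea can be sketched on a single machine with the classic word-count pipeline (this is only an analogy for the MapReduce phases, not Hadoop itself): `tr` plays the map step by emitting one word per line, `sort` plays the shuffle by grouping identical keys together, and `uniq -c` plays the reduce by counting each group.

```shell
# Map:    split the input into one word (key) per line
# Shuffle: sort brings identical keys next to each other
# Reduce:  uniq -c counts each run of identical keys
# Resulting counts: be=2, not=1, or=1, to=2
printf 'to be or not to be\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c
```

In real MapReduce, each of these stages runs in parallel across many machines, with the framework moving intermediate keys between them.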
The diagram above shows various pieces of commodity hardware, which could be single-CPU machines or servers with higher capacity.
Doug Cutting, Mike Cafarella and team took the solution provided by Google and started an open-source project called Hadoop in 2005; Doug named it after his son’s toy elephant. Apache Hadoop is now a registered trademark of the Apache Software Foundation.
Hadoop runs applications using the MapReduce algorithm, where the data is processed in parallel on different CPU nodes. In short, the Hadoop framework is capable enough to support applications that run on clusters of computers and perform complete statistical analysis of huge amounts of data.