2019 promises to be a monumental year for cloud technology and adoption. IDC has just released a report, IDC FutureScape: Worldwide Cloud 2019 Predictions, that lays out 10 predictions for the next five years. Along with the key drivers behind the coming changes, the report offers in-depth explanations of each prediction, its associated IT impact, and advice on how to approach it strategically.
IDC took into account six primary forces to predict the future of cloud, including:
The next chapter of digital transformation (and retrofitting the old into the new)
Platforms
Turning data into value
Artificial intelligence
Consumer expectations
Here’s an overview of the top 10 predictions:
According to IDC, “there will be a dramatically accelerated period of cloud-native digital innovation. Organizations will take advantage of cloud and cloud-based AI, blockchain, and hyperagile application technologies such as containers, functions, microservices app architectures, and API-based integration to drive the innovation at an increasingly fast pace.” The paper discusses how technology buyers can strategize to prepare for the future.
NetApp Active IQ provides customers and partners with actionable intelligence about their NetApp environments via a dashboard that summarizes performance, availability, capacity forecasts, health, case histories, upgrade recommendations, and more. Every week the system generates about 100TB of data and 225 million files, and it is still growing!
As the team responsible for IT operations and for fulfilling the storage requirements of these rapidly growing, almost insatiable data sets, we struggled on two fronts. First, as the Active IQ data lake grew, we constantly teetered on exceeding SLA targets for application processing and failing to meet user expectations. It was nerve-wracking.
Second, we continually hit capacity limits on the assigned NFS volumes. Every 2 to 3 weeks, new volumes had to be established with redirects. This drove 24.2 hours of change activity each month as the Command Center dealt with frequent alerts from exceeded thresholds, the storage team established new volumes, and the application developers updated over 200 servers with the new information. It was a reactive, manual hot mess.
NetApp ONTAP FlexGroup Volumes
To address the Active IQ data ingestion challenge and its growing data lake, we implemented NetApp ONTAP FlexGroup volumes, which can scale up to 20PB of storage and 400 billion files. The FlexGroup technology allowed us to present a single, scalable storage volume while delivering a 15-20% reduction in overall data processing time on the application side. We have seen a 2x improvement in input/output operations per second (IOPS) performance, 10% more throughput, and lower total average latency. Today we are easily meeting SLA targets with ample headroom.
By implementing FlexGroup, we have simplified operations and reduced the tedious manual volume-change activity from once every 2 to 3 weeks to once every two years (based on projected data growth). This is because a FlexGroup volume can span multiple nodes and grow capacity nondisruptively while presenting a single namespace. Today, when we run out of space, we can add more nodes or constituent volumes to the same FlexGroup volume, transparently to the application. We also get to leverage all of the storage efficiencies of ONTAP, such as deduplication, compaction, and compression.
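For readers who automate this kind of change, here is a minimal sketch of how a FlexGroup resize might be scripted against the ONTAP REST API using Python's requests library. It is not a validated script: the cluster address, credentials, and volume UUID are placeholders, and field names should be checked against the API reference for your ONTAP release.

```python
# Minimal sketch (not a validated script) of resizing a FlexGroup volume
# through the ONTAP REST API with Python's requests library.
# Cluster address, credentials, and volume UUID are placeholders.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical cluster management LIF
AUTH = ("admin", "********")                   # use a credential store in practice

def expand_flexgroup(volume_uuid: str, new_size_bytes: int) -> None:
    """Request a nondisruptive resize of an existing FlexGroup volume."""
    resp = requests.patch(
        f"{CLUSTER}/api/storage/volumes/{volume_uuid}",
        json={"size": new_size_bytes},         # ONTAP REST expresses volume size in bytes
        auth=AUTH,
        verify=False,                          # demo only; verify TLS certificates in production
    )
    resp.raise_for_status()

# Example: grow the Active IQ FlexGroup volume to 1 PiB.
# expand_flexgroup("<volume-uuid>", 1 * 2**50)
```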
The introduction of FlexGroup came at the right time, just as volume change activity for NetApp’s growing Active IQ data lake was set to increase. We are pleased with how FlexGroup blends near-infinite capacity with predictable, low-latency performance for our metadata-heavy Active IQ workloads.
Data center technology moves in cycles. In the current cycle, standard compute servers have largely replaced specialized infrastructure. This holds true in both the enterprise and the public cloud.
Although this standardization has had tremendous benefits, enabling infrastructure and applications to be deployed more quickly and efficiently, the latest computing challenges threaten the status quo. There are clear signs that a new technology cycle is beginning. New computing and data management technologies are needed to address a variety of workloads that the “canonical architecture” handles poorly.
NetApp and NVIDIA share a complementary vision for modernizing both the data center and the cloud. We’re using GPU and data acceleration technologies to address emerging computing workloads like AI, along with many other compute-intensive and HPC workloads, including genomics, ray tracing, analytics, databases, and seismic processing and interpretation. Software libraries and other tools offer support to teams moving applications from CPUs to GPUs; RAPIDS is one recent example that applies to data science.
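As a concrete illustration of what that CPU-to-GPU move can look like in data science, the sketch below shows familiar pandas-style operations executed on the GPU with RAPIDS cuDF. It assumes a CUDA-capable GPU with the cudf package installed; the CSV path and column names are hypothetical.

```python
# Minimal RAPIDS sketch: pandas-style analytics executed on the GPU with cuDF.
# Assumes a CUDA-capable GPU and the RAPIDS cudf package; the CSV path and
# column names ("device_id", "latency_ms") are hypothetical.
import cudf

df = cudf.read_csv("events.csv")               # the DataFrame lives in GPU memory
summary = (
    df.groupby("device_id")["latency_ms"]
      .mean()                                  # aggregation runs on the GPU
      .sort_values(ascending=False)
)
print(summary.head(10))                        # top 10 devices by average latency
```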
Server Sprawl and the Emergence of GPU Computing
Server sprawl is a painful reality in many data centers. Even though CPUs get more powerful every year, the total number of servers keeps climbing because:
More CPUs are needed to support the growth of existing workloads
More CPUs are needed to run new workloads
Digital transformation is accelerating the rate at which new application workloads are coming online, making the problem even worse. This is where GPU computing comes in. You’re probably aware that GPUs are being used for deep learning and other AI processing—as well as for bitcoin mining—but these are just the best-known applications in the field of GPU computing.
Beginning in the early 2000s, computer scientists realized that the capabilities that make GPUs well suited for graphics processing could be applied to a wide variety of parallel computing problems. For example, NetApp began partnering with Cisco, NVIDIA, and several energy industry partners to build GPU computing architectures for seismic processing and visualization in 2012. Today’s fastest supercomputers are built with GPUs, and GPUs play an important role in high-performance computing (HPC), analytics, and other data-intensive disciplines.
Because a single GPU can take the place of hundreds of CPUs for these applications, GPUs hold the key to delivering critical results more quickly while reducing server sprawl and cost. For example, a single NVIDIA DGX-2 system occupies just 10U of rack space, cutting the infrastructure footprint by 60 times at one-eighth of the cost compared with a 300-node CPU-only cluster doing the same work.
Data Sprawl Requires a Better Approach to Data Management
The same architectural approach that contributes to server sprawl also creates a second—and more insidious—problem: data sprawl. With the sheer amount of data that most enterprises are dealing with—including relatively new data sources such as industrial IoT—data has to be managed very efficiently, and you have to be extremely judicious with data copies. However, you may already have multiple, separate server clusters to address various needs such as real-time analytics, batch processing, QA, AI, and other functions. A cluster typically contains three copies of data for redundancy and performance, and each separate cluster may have copies of exactly the same datasets. The result is vast data sprawl—with much of your storage consumed to store identical copies of the same data. It’s nearly impossible to manage all that data or to keep copies in sync.
Figure: Many enterprises have separate compute clusters to address different use cases, leading to both server sprawl and data sprawl.
Complicating the situation further, the I/O needs of the various clusters shown in the figure are different. How can you reduce data sprawl and deliver the right level of I/O at the right cost for each use case? A more comprehensive approach to data is clearly needed.
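To make the multiplication behind data sprawl concrete, here is a back-of-the-envelope sketch. The figures are purely illustrative assumptions, not measurements from any real environment.

```python
# Back-of-the-envelope illustration of how per-cluster replication multiplies
# raw capacity. All figures are hypothetical assumptions, not measurements.
dataset_tb = 500            # size of one shared dataset, in TB
replicas_per_cluster = 3    # typical replication factor for redundancy/performance
clusters = 4                # e.g., real-time, batch, QA, and AI clusters

raw_tb = dataset_tb * replicas_per_cluster * clusters
print(f"{dataset_tb} TB of unique data consumes {raw_tb} TB of raw storage")
# -> 500 TB of unique data consumes 6000 TB of raw storage
```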
Is the Cloud Adding to Your Server and Data Sprawl Challenges?
Most enterprises have adopted a hybrid cloud approach, with some workloads in the cloud and some on premises. For example, for the workloads shown in the figure, you might want to run your real-time and machine-learning clusters on premises, with QA and batch processing in the cloud. Even though the cloud lets you flexibly adjust the number of server instances you use in response to changing needs, the total number of instances at any given time is still large and hard to manage. In terms of data sprawl, the cloud could actually make the problem worse. Challenges include:
Moving and synchronizing data between on-premises data centers and the cloud
Delivering necessary I/O performance in the cloud
You may view inexpensive cloud storage such as AWS S3 buckets as an ideal storage tier for cold data, but in practice it too requires a level of efficient data movement and management that may be difficult to achieve.
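As one hedged illustration of the policy work involved, the sketch below uses boto3 to attach a lifecycle rule that tiers aging objects to colder storage classes. The bucket name, prefix, and day thresholds are placeholders, and a rule like this still leaves the harder problems of data movement, cataloging, and synchronization unsolved.

```python
# Hedged sketch: expressing a cold-data tiering policy as an S3 lifecycle rule
# with boto3. The bucket name, prefix, and day thresholds are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-cold-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```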
As we enter 2019, I wanted to pause and reflect on what we accomplished last year as a group. Many people ask what the return on investment of WIT is. That question can be answered by looking at the results of what we have been able to accomplish—together.
We are a community of peers, a place to brainstorm and learn, a place to practice leadership, a group to push ideas, seek equality, a team to drive diversity. Together, we are better. We are focused, we have goals, we are a global movement.
As a data-driven organization, it is fitting that we examine the data behind our accomplishments this year. In 2018, membership in the NetApp WIT group grew to 1,240 members across thirteen global site chapters.
In addition to sending 115 representatives from NetApp, we were excited to have two speaking opportunities at the 2018 Grace Hopper Celebration in Houston (#GHC18): Jean English and Kim Weller presented “Data Visionary: Changing the World With Data,” and Meghan Steele participated in the panel session “Career Confessions: How I Became a Leader in Tech.” Sheila Rohra, VP of Transformation, shared the mantra “Transformation: Stop. Change. Invest.” with the audience at the Grace Hopper Celebration India (#GHCI18).
We were pleased to host standing-room-only crowds at our annual Women in Technology events held during the NetApp Insight conferences in Las Vegas and Barcelona. The Las Vegas panel session was hosted by Jean English and included special guests Kate Swanborg of DreamWorks Animation and Renee Yao of NVIDIA. They were joined by NetApp CEO George Kurian and Sheila Rohra.
Joining me, Jean English, and Kate Swanborg, the Barcelona panel session included Henri Richard, NetApp EVP of Worldwide Field and Customer Operations; Barb Hardy, NetApp’s Global Head of Diversity, Inclusion and Belonging; and a NetApp Professional Services Consultant.
In addition to these events, NetApp WIT hosted exciting Executive Speakers training sessions, challenging and collaborating with the WIT community both inside and outside of NetApp. We hosted global leadership webinars for members—including 18 Watermark Leadership webinars.
We sponsored workshops, networking events, and conferences, including the Professional Women’s Business Conference, the WT2 Conference, the Massachusetts Women’s Conference, the CWIB Conference, and the Simmons Leadership Conference. NetApp’s Bhavna Mistry presented at the European Women in Tech Conference.
NetApp WIT actively participates in member-driven community outreach efforts, including local STEM events, Girls @ Tech, Dress for Success, Operation Dignity Outreach, Ronald McDonald House, and #MentorHer.
We were pleased to present the #IAmUnique video series in India, along with many weekly and monthly networking sessions to encourage a sense of community—a place to learn, a place to practice, a place to give back and to serve.
I am looking forward to the exciting opportunities and possibilities for 2019. I encourage you to join us!
The Next Mandatory Business Survival Position for Service Provider Future Success
Enterprise IT is changing how it deploys and consumes technology. This means that everyone who sells products or services to enterprise IT must start to think differently about how they go to market. Enterprise IT strategies are changing so fast that it’s getting harder and harder for cloud and hosting providers to keep up with the pace and deliver and deploy the services that enterprises want to consume. As the hyperscale cloud providers grow at a rapid 49% five-year CAGR, traditional service providers are trying to figure out their future cloud strategies to remain relevant to their customers. Enterprise customers have their eyes on a hybrid multicloud end state, and they’re looking to their service provider partners to deliver that hybrid multicloud environment in a secure cloud infrastructure model that can be consumed across the data location continuum and at global scale.
This post shares some insider insights about how service providers and VARs are changing their business models to lay the foundation for the future of their business.
Channel Is Changing the Rules and Building Cloud Infrastructure
Traditionally, the channel, VARs, and technology resellers ventured onto their customers’ premises to sell technology, perform break–fix work, and deliver their own unique value proposition around the sales, delivery, and support of the IT infrastructure. When their customers wanted a cloud or hosting solution, they often partnered with a local cloud and hosting provider on behalf of the customer; that was a win-win for all parties.
The challenge, however, was that ever-tightening VAR margins caused the channel to look for alternative ways to drive up profitability. Cloud and hosting providers, on the other hand, were enjoying strong margins of 65% or more, and even during the toughest economic times they were growing at 20%+ year over year. What began as a way for VARs to improve their margins became cloud and hosting infrastructure delivered and managed by the VAR itself, targeting enterprise IT without the traditional partnership with cloud and hosting providers.
Service Providers Are Changing Their Models Too
As increasing numbers of VARs stand up independent cloud and hosting infrastructures, the traditional service provider brands have had to change how they remain competitive in an increasingly crowded market. Many service providers have made acquisitions to fast-track skills, accelerate market penetration, and explore entirely new lines of business. Many have also changed their operating models and are moving beyond the walls of their data centers to offer similar managed infrastructure services on enterprise premises and in colocation, in an effort to appeal to enterprises that are looking for new consumption-based cost models. The current state of the channel and service provider market is complicated, and it will continue to evolve as both sides offer unique value propositions to their customers in very similar and overlapping ways.
Finding a unique position that enables customer success will be crucial for VAR and service provider routes to market, and that position will most likely revolve around where data is positioned in one of the four locations on the customer data continuum.
Figure: The customer data continuum, showing the four locations where customer data can live.
Customer IT services can be delivered to enterprise users from four primary locations:
From the enterprise premises in a traditional enterprise deployment.
In colocation, either managed or unmanaged.
In a typical service provider data center, usually in a multitenant deployment model.
In a public cloud model.
The need to digitally transform is creating immense pressure for data center teams who are increasingly expected to be leaders in this transformation. Compounding this pressure are the expectations that today’s IT teams will mimic the best of public cloud services – simplicity, agility, robust service catalogs – within their on-premises tools.
IT consumers have considerable experience with public cloud services and fully expect a comparable swipe-and-go process of dialing up services on-demand within their own organization’s IT department. Developers and business leaders expect their IT teams to automate the provisioning of resources for any workload at any location that is needed. IT teams are being forced to adapt to a changing world where public cloud simplicity is the norm or become obsolete.
NetApp is excited to announce three new validated and proven architectures that simplify the design, deployment, and support of a broad set of scenarios and diverse applications, allowing you to reduce the time and risk of building a complete solution in your own data center.
1) Build a Private Cloud Foundation. VMware Validated Design for Private Cloud with NetApp HCI. Developed in close partnership with VMware, this solution delivers a thoroughly tested platform upon which IT organizations can automate the provisioning of common, repeatable requests and respond to business needs with more agility and predictability. With this certification, NetApp has become a member of VMware’s Certified Partner Architecture program, with the ability to deliver the full VMware Software-Defined Data Center (SDDC) stack.
Enables a self-service catalog that allows users to consume resources with no intervention.
Delivers cost and consumption tracking and analysis across business groups, applications, and services.
Ensures fast and highly predictable recovery times to enable application availability and mobility across sites.
2) Enhance the End-User Experience. NetApp Verified Architecture for VMware End-User Computing with NetApp HCI and NVIDIA GPUs.
Meet end-user demands for a consistent and intuitive experience across devices while ensuring that the business computing environment is secure, easy to manage, and continuously compliant.
Easily manage application delivery and user installed applications.
Deliver just-in-time desktops that give end users instant access to apps on a fully customized desktop.
Consolidate all applications on one cluster without compromising performance, even at scale.
3) Build Your Data Fabric. NetApp Technical Report for Object Storage with NetApp HCI.
Support hybrid cloud workflows with on-premises, S3-compatible object storage on NetApp HCI to ensure simplicity of data access and retention (see the sketch after this list).
Provide high-performance access to “hot” data with primary-tier object or block flash storage.
Optimize retention of cold and archived data on a secondary object tier.
Ensure adherence to data governance and regulatory requirements while maintaining simplicity of data access.
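To show how “simplicity of data access” plays out in practice, here is a minimal sketch of an application talking to an on-premises, S3-compatible endpoint with boto3. The endpoint URL, credentials, and bucket and key names are placeholders; the point is simply that the same S3 client code works on premises and in the public cloud.

```python
# Minimal sketch of an application using a standard S3 client against an
# on-premises, S3-compatible endpoint. The endpoint URL, credentials, and
# bucket/key names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.dc1.example.com",   # hypothetical on-premises endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls work unchanged against public cloud object storage,
# which is what makes hybrid workflows straightforward to script.
s3.upload_file("report.parquet", "analytics-hot", "reports/report.parquet")
obj = s3.get_object(Bucket="analytics-hot", Key="reports/report.parquet")
print(obj["ContentLength"], "bytes retrieved")
```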