Thursday, February 14, 2019

How Cloud Technology Will Change in 2019 - NetApp Certifications

2019 promises to be a monumental year for cloud technology and adoption. IDC just released IDC FutureScape: Worldwide Cloud 2019 Predictions, a report that makes 10 predictions spanning the next five years. Along with the key drivers behind the coming changes, the report offers in-depth explanations of each prediction, its associated IT impact, and advice on how to approach it strategically.

IDC took into account six primary forces to predict the future of cloud:

The next chapter of digital transformation (and retrofitting the old into the new)
Platforms
Turning data into value
Artificial intelligence
Consumer expectations

Here’s an overview of the top 10 predictions:



According to IDC, “there will be a dramatically accelerated period of cloud-native digital innovation. Organizations will take advantage of cloud and cloud-based AI, blockchain, and hyperagile application technologies such as containers, functions, microservices app architectures, and API-based integration to drive the innovation at an increasingly fast pace.” The paper discusses how technology buyers can strategize to prepare for the future.

What our experts say about NetApp Certification Exams



Thursday, January 31, 2019

ONTAP FlexGroup Technology Powers NetApp’s Massive Active IQ Data Lake - NetApp Certifications


NetApp Active IQ provides customers and partners with actionable intelligence on their NetApp environment via a dashboard that summarizes performance, availability, capacity forecasting, health summary, case histories, upgrade recommendations, and more. Every week the system generates about 100TB of data and 225 million files―and it is growing!

As the team responsible for IT operations and fulfilling storage requirements for these rapidly growing, almost insatiable datasets, we struggled on two fronts. As the Active IQ data lake grew, we constantly teetered on exceeding SLA targets for application processing and failing to meet user expectations. It was nerve-racking.

Moreover, we continually hit capacity limitations of the assigned NFS volumes. Every 2-3 weeks, new volumes had to be established with redirects. This drove 24.2 hours of change activity each month as the Command Center dealt with the frequent alerts from exceeded thresholds, the storage team established new volumes, and the application developers updated over 200 servers with the new information. It was a reactive, manual hot mess.

NetApp ONTAP FlexGroup Volumes


To address the Active IQ data ingestion challenge and its growing data lake, we implemented NetApp ONTAP FlexGroup Volumes, which can scale up to 20 PB of storage and 400 billion files. The FlexGroup technology allowed us to present a single, scalable storage volume while delivering a 15-20% reduction in overall data processing time on the application side. We have seen a 2x improvement in input/output operations per second (IOPS), 10% more throughput, and lower total average latency. Today we are easily meeting SLA targets with ample headroom.

By implementing FlexGroup, we have simplified operations: the tedious manual activity associated with volume changes has gone from every 2-3 weeks to once every two years (based on projected data growth). This is because a FlexGroup volume can span multiple nodes and grow capacity non-disruptively while providing a single namespace. Today, when we run out of space, we can add more nodes/constituent volumes to the same FlexGroup volume, transparently to the app. We also get to leverage all of the efficiencies of ONTAP, like deduplication, compaction, and compression.
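To make the change concrete, here is a minimal sketch of what provisioning and later growing a FlexGroup volume can look like through the ONTAP REST API. The cluster address, credentials, SVM and aggregate names, and the exact request fields below are illustrative assumptions rather than details from our environment; check the REST API reference for your ONTAP release, and note that the same operations are available from the ONTAP CLI and System Manager.

    # Minimal sketch (not production code): create a FlexGroup volume and
    # later grow it in place. All names, credentials, and field values are
    # hypothetical.
    import requests

    ONTAP = "https://cluster-mgmt.example.com"   # hypothetical cluster management LIF
    AUTH = ("admin", "secret")                   # placeholder credentials
    VOLUMES = f"{ONTAP}/api/storage/volumes"
    TIB = 1024 ** 4

    # One FlexGroup volume, presented as a single namespace but spread
    # across aggregates on multiple nodes.
    create_body = {
        "name": "activeiq_fg",
        "svm": {"name": "svm_activeiq"},
        "style": "flexgroup",
        "aggregates": [{"name": "aggr_node1"}, {"name": "aggr_node2"}],
        "size": 400 * TIB,
        "nas": {"path": "/activeiq_fg"},         # single NFS junction path
    }
    requests.post(VOLUMES, json=create_body, auth=AUTH, verify=False).raise_for_status()

    # Later: grow the same volume non-disruptively instead of creating a
    # new volume and redirecting 200+ application servers to it.
    vol = requests.get(VOLUMES, params={"name": "activeiq_fg"},
                       auth=AUTH, verify=False).json()["records"][0]
    requests.patch(f"{VOLUMES}/{vol['uuid']}", json={"size": 600 * TIB},
                   auth=AUTH, verify=False).raise_for_status()

Because the junction path never changes, clients keep mounting the same export while capacity grows behind it.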

The introduction of FlexGroup came at the right time, just as volume change activity for NetApp’s growing Active IQ data lake was set to increase even further. We are pleased with how FlexGroup blends near-infinite capacity with predictable, low-latency performance for our metadata-heavy Active IQ workloads.

Success Secrets: How You Can Pass NetApp Certification Exams on the First Attempt



Sunday, January 20, 2019

Bridging the CPU and GPU Universes - NetApp Certification Exams VCE Files


Data center technology moves in cycles. In the current cycle, standard compute servers have largely replaced specialized infrastructure. This holds true in both the enterprise and the public cloud.

Although this standardization has had tremendous benefits, enabling infrastructure and applications to be deployed more quickly and efficiently, the latest computing challenges threaten the status quo. There are clear signs that a new technology cycle is beginning. New computing and data management technologies are needed to address a variety of workloads that the “canonical architecture” struggles to handle.

NetApp and NVIDIA share a complementary vision for modernizing both the data center and the cloud. We’re using GPU and data acceleration technologies to address emerging computing workloads like AI, along with many other compute-intensive and HPC workloads, including genomics, ray tracing, analytics, databases, and seismic processing and interpretation. Software libraries and other tools offer support to teams moving applications from CPUs to GPUs; RAPIDS is one recent example that applies to data science.
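To give a flavor of that migration path for data science teams, here is a minimal sketch using RAPIDS cuDF, which mirrors the pandas DataFrame API on the GPU. The file and column names are hypothetical, and running it requires an NVIDIA GPU with the RAPIDS packages installed.

    # Minimal sketch: a pandas-style aggregation executed on the GPU with
    # RAPIDS cuDF. File and column names are purely illustrative.
    import cudf

    # Load the data directly into GPU memory instead of host memory.
    df = cudf.read_csv("sensor_readings.csv")

    # Familiar DataFrame operations, executed on the GPU.
    summary = (
        df[df["temperature"] > 0]
        .groupby("device_id")["temperature"]
        .mean()
        .sort_values(ascending=False)
    )

    # Convert back to pandas on the CPU only when downstream code needs it.
    print(summary.to_pandas().head())

The appeal is that the code reads like ordinary pandas, so teams can move existing workloads to GPUs with relatively little rewriting.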

Server Sprawl and the Emergence of GPU Computing


Server sprawl is a painful reality in many data centers. Even though CPUs get more powerful every year, the total number of servers keeps climbing because:

  • More CPUs are needed to support the growth of existing workloads
  • More CPUs are needed to run new workloads

Digital transformation is accelerating the rate at which new application workloads are coming online, making the problem even worse. This is where GPU computing comes in. You’re probably aware that GPUs are being used for deep learning and other AI processing—as well as for bitcoin mining—but these are just the best-known applications in the field of GPU computing.

Beginning in the early 2000s, computer scientists realized that the capabilities that make GPUs well suited for graphics processing could be applied to a wide variety of parallel computing problems. For example, NetApp began partnering with Cisco, NVIDIA, and several energy industry partners to build GPU computing architectures for seismic processing and visualization in 2012. Today’s fastest supercomputers are built with GPUs, and GPUs play an important role in high-performance computing (HPC), analytics, and other data-intensive disciplines.

Because a single GPU can take the place of hundreds of CPUs for these applications, GPUs hold the key to delivering critical results more quickly while reducing server sprawl and cost. For example, a single NVIDIA DGX-2 system takes just 10U of rack space, cutting the infrastructure footprint by 60 times at one-eighth of the cost compared with a 300-node CPU-only cluster doing the same work.
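As a quick back-of-the-envelope check, the 60-times figure follows directly from rack space if you assume roughly 2U per CPU server (an assumption for illustration, not a published sizing):

    # Back-of-the-envelope footprint math; the 2U-per-server figure is an
    # assumption for illustration and should be adjusted for real hardware.
    cpu_nodes = 300
    rack_units_per_cpu_node = 2                              # assumed server height
    cpu_cluster_units = cpu_nodes * rack_units_per_cpu_node  # 600U in total

    dgx2_units = 10                                          # a single DGX-2 chassis

    print(cpu_cluster_units / dgx2_units)                    # 60.0 -> ~60x smaller footprint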

Data Sprawl Requires a Better Approach to Data Management


The same architectural approach that contributes to server sprawl also creates a second—and more insidious—problem: data sprawl. With the sheer amount of data that most enterprises are dealing with—including relatively new data sources such as industrial IoT—data has to be managed very efficiently, and you have to be extremely judicious with data copies. However, you may already have multiple, separate server clusters to address various needs such as real-time analytics, batch processing, QA, AI, and other functions. A cluster typically contains three copies of data for redundancy and performance, and each separate cluster may have copies of exactly the same datasets. The result is vast data sprawl—with much of your storage consumed to store identical copies of the same data. It’s nearly impossible to manage all that data or to keep copies in sync.

Many enterprises have separate compute clusters to address different use cases, leading to both server sprawl and data sprawl.
Complicating the situation further, the I/O needs of the various clusters shown in the figure are different. How can you reduce data sprawl and deliver the right level of I/O at the right cost for each use case? A more comprehensive approach to data is clearly needed.

Is the Cloud Adding to Your Server and Data Sprawl Challenges?


Most enterprises have adopted a hybrid cloud approach, with some workloads in the cloud and some on the premises. For example, for the workloads shown in the figure, you might want to run your real-time and machine-learning clusters on your premises, with QA and batch processing in the cloud. Even though the cloud lets you flexibly adjust the number of server instances you use in response to changing needs, the total number of instances at any given time is still large and hard to manage. In terms of data sprawl, the cloud could actually make the problem worse. Challenges include:

  • Moving and syncing data between on-premises data centers and the cloud
  • Delivering necessary I/O performance in the cloud

You may view inexpensive cloud storage such as AWS S3 buckets as an ideal storage tier for cold data, but in practice it, too, requires a level of efficient data movement and management that may be difficult to achieve.

What our experts say about NetApp Certification Exams



Tuesday, January 8, 2019

Women In Technology 2018 Year In Review - NetApp Certifications


As we enter 2019, I wanted to pause and reflect on what we accomplished last year as a group. Many people ask: what is the return on investment of WIT? That question can be answered by looking at the results of what we have been able to accomplish—together.

We are a community of peers, a place to brainstorm and learn, a place to practice leadership, a group to push ideas, seek equality, a team to drive diversity. Together, we are better. We are focused, we have goals, we are a global movement.

As a data-driven organization, it is fitting that we examine the data behind our accomplishments. In 2018, membership in the NetApp WIT group grew to 1,240 members across thirteen global site chapters.

In addition to the 115 representatives from NetApp who attended, we were excited to have two speaking opportunities during the recent 2018 Grace Hopper Celebration in Houston (#GHC18): Jean English and Kim Weller presented “Data Visionary: Changing the World With Data,” and Meghan Steele participated as a panelist in the session “Career Confessions: How I Became a Leader in Tech.” Sheila Rohra, VP of Transformation, shared the mantra “Transformation: Stop. Change. Invest.” with the audience attending the Grace Hopper Celebration India (#GHCI18).

We were pleased to host standing-room-only crowds at our annual Women in Technology events held during the NetApp Insight conferences in Las Vegas and Barcelona. The Las Vegas panel session was hosted by Jean English and included special guests Kate Swanborg of DreamWorks Animation and Renee Yao of NVIDIA. They were joined by NetApp CEO George Kurian, along with Sheila Rohra.

The Barcelona panel session, which I joined along with Jean English and Kate Swanborg, included NetApp EVP of Worldwide Field and Customer Operations Henri Richard, NetApp’s Global Head of Diversity, Inclusion and Belonging Barb Hardy, and a NetApp Professional Services Consultant.

In addition to these events, NetApp WIT hosted exciting Executive Speakers training sessions, challenging and collaborating with the WIT community both inside and outside of NetApp. We hosted global leadership webinars for members—including 18 Watermark Leadership webinars.

We sponsored workshops, networking events, and conferences including the Professional Women’s Business Conference, WT2 conference, Massachusetts Women’s Conference, CWIB conference and Simmons Leadership conference. NetApp’s Bhavna Mistry presented at the European Women in Tech Conference.

NetApp WIT actively participates in member-driven community outreach efforts, including local STEM events, Girls @ Tech, Dress for Success, Operation Dignity Outreach, Ronald McDonald House, and #MentorHer.

We were pleased to present the #IAmUnique video series in India, along with many weekly and monthly networking sessions to encourage a sense of community—a place to learn, a place to practice, a place to give back and to serve.

I am looking forward to the exciting opportunities and possibilities for 2019. I encourage you to join us!

Success Secrets: How You Can Pass NetApp Certification Exams on the First Attempt