VMware

vSphere 6 and XtremIO Application Protection

Consolidating and Protecting Enterprise Applications on vSphere 6 with XtremIO

This whitepaper was published this month and is well worth a read if you have deployed, or are planning to deploy, XtremIO.

The paper covers the following:

  • Multi-site VMware based infrastructure with integrated Replication and Disaster Recovery
  • XtremIO 4.0 with: vSphere 6.0 / RecoverPoint 4.1 / Site Recovery Manager 6.1 / VSI plugin 6.6
  • Consolidated business-critical enterprise applications comprising: DSS / Oracle / SQL

Whitepaper

All Flash, XtremIO and Copy Data Management

It’s been a while since my last blog post; it’s been a very busy year with travel and taking on a new role at EMC (soon to be Dell) as a technical specialist for XtremIO.

I joined EMC around 10 months ago (time really flies) for one reason: to be part of a disruptive storage technology, XtremIO.

Traditional spinning disk technology has not fundamentally changed in the past 20+ years: 15K RPM drives were introduced in 2000, and physics will not allow disks to spin any faster.

Now I’m not saying that disk is dead (like certain other vendors) as it still has a place in the datacenter for unstructured data, backup/archive, big data etc.

Having spent many years in the storage market, I can tell you the main challenge with spinning disk is latency. You have to throw a lot of spindles into an array to keep application response times consistent. This is a very expensive exercise and leads to poor capacity utilisation, wasted datacenter space, and high power and cooling requirements.
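
To put some rough numbers on that, here is a back-of-the-envelope comparison; the per-device IOPS figures below are common ballpark assumptions, not measurements from any particular array.

```python
# Back-of-the-envelope spindle maths; per-device IOPS figures are ballpark assumptions.
required_iops = 60_000        # example application requirement
iops_per_15k_drive = 180      # roughly what a single 15K RPM disk can sustain
iops_per_ssd = 20_000         # conservative figure for an enterprise SSD

print(f"15K spindles needed: {required_iops / iops_per_15k_drive:.0f}")  # ~333 drives
print(f"SSDs needed: {required_iops / iops_per_ssd:.0f}")                # ~3 drives
```

Hundreds of spindles just to hit an IOPS target, before capacity, rack space, power or cooling even enter the conversation.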

Like many others, XtremIO is an All Flash Array. This is where the similarities end!

Solid State Disks (SSDs) and other flash media are very fast; most of you already know this, as you have them in your laptops and smartphones.

Whilst speed and IOPS are important, they’re not the most important factor. In fact, flash will give you way more IOPS than you’ll probably ever use!

As I mentioned before, latency and application response times are what matter most, and the key with latency is for it to be consistently low.

Flash storage is an enabler for consistently low latency; however, the storage architecture you build around it is the key to success or failure!

Most vendors (including EMC) started out by adding flash as a front-end cache to accelerate reads/writes on existing disk arrays. We then saw the next generation of hybrid storage arrays/appliances that were purpose-built with both SSDs and disk. Around 2010/2011 the first All Flash Arrays started to appear on the market.

This new generation of All Flash Arrays (AFAs) came along with software that was optimised for NAND media. I won’t go into the details and nuances of NAND flash as there are plenty of other sites that cover this very well.

The major drawback with these new AFAs is that they were designed using active/passive (or active/standby) dual-controller architectures: nothing radically different from previous arrays. This design approach is fine for spinning disk arrays, as the performance characteristics of the media are much lower, but with flash media it’s a whole different ball game.

Dual controller architectures can very quickly become a performance bottleneck, especially when you add capacity. This type of architecture is commonly known as ‘scale-up’.

So, what happens when you hit the limits of the storage controller? You can either move workloads off to somewhere else or buy another AFA and have separate islands of storage (which sounds no different to what we did before, except the media is different!).

A certain vendor will offer to swap out controllers, however the devil is in the details. I’d strongly recommend you read the T’s & C’s.

What attracted me to XtremIO is that it’s different to other AFA’s. The guys in Israel who designed the platform (before EMC acquired it in 2012) really knew their stuff when it came to flash. Rather than just following everyone else down the path of a dual controller architecture, they chose the hard path and built a true multi-node ‘scale-out’ cluster. For more information on this see the following White Paper: Technical Introduction to XtremIO

Moving on from performance and the importance of architecture, AFAs must be able to do other things as well. Being fast is just the beginning.

Data Services and application integration are what make any array truly valuable to a business.

Deduplication and compression are pretty much table stakes for AFAs. Snapshots and clones less so, and mileage will vary.

Copy Data Management (CDM) is one of the hot areas that XtremIO is focused on. CDM allows application owners and developers to create multiple copies of key datasets in a consistent, fast and space-efficient manner without impacting the performance of production workloads on the same AFA.
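
As a rough illustration of what driving a copy operation programmatically can look like, here is a minimal Python sketch against the XMS REST API. The endpoint path and body fields below are indicative only; treat them as assumptions and check them against the XtremIO REST API guide for your XMS version.

```python
# Hedged sketch: request an on-demand, space-efficient copy (snapshot set) of a
# consistency group via the XMS REST API. Endpoint and field names are assumptions
# to be verified against the XtremIO REST API documentation.
import requests

XMS = "https://xms.example.com"                 # placeholder XMS address
AUTH = ("rest_user", "password")                # placeholder credentials

payload = {
    "cluster-id": "xbrick-cluster-01",          # placeholder cluster name
    "consistency-group-id": "oracle-prod-cg",   # placeholder source consistency group
    "snapshot-set-name": "oracle-dev-refresh",  # name for the new copy set
}

resp = requests.post(f"{XMS}/api/json/v2/types/snapshots",
                     json=payload, auth=AUTH, verify=False)  # lab only; verify certs in prod
resp.raise_for_status()
print(resp.json())
```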

As the latest Star Wars movie was released last month, I put together a themed video outlining XtremIO Copy Data Management. Enjoy!

Latest Tintri Announcements at VMware Partner Exchange 2015

It’s been a busy week at VMware PEX in San Francisco for Tintri, with the announcement of the following:

  • vSphere 6.0 support
  • VMware Virtual Volumes (VVols) support
  • VMware Integrated OpenStack (VIO) support
  • New Tintri plug-in for the VMware vRealize Suite
  • VMware vRealize Operations (vROps) Management Pack (technical preview)

Finally, VMware has announced the availability of the VVol API in the upcoming vSphere 6 release. I’m sure many of you, like me, have been waiting patiently for this to happen after watching it evolve over the past 3+ years. It just goes to show that storage for virtualisation is not an easy thing to get right. Here at Tintri we are planning to support 1M VVols on the current T880 platform. For more insight into the world of VVols, see the following blog from the Tintri CTO and co-founder Kieran Harty: http://www.tintri.com/blog/2015/02/vmware-vvol-and-tintri

Tintri support for VMware Integrated OpenStack will allow customers who wish to run an OpenStack environment on their existing VMware infrastructure to leverage both the enterprise stability and features of VMware vSphere and the storage insight and performance efficiencies of their Tintri storage. More to come on the OpenStack side of things in the coming weeks/months.

On the management side, anyone who is using vCOPS (now vRealize Operations) and Tintri will have an integrated solution.

The Tintri Management Pack for vRealize Operations exposes key performance indicators (KPIs) for your Tintri storage to vRealize dashboards. It’s easy to install and allows you to have a consistent view of VMs and Tintri application-aware storage. The integration simplifies how cloud administrators monitor performance and capacity. By linking VMs and storage, it also helps with planning and forecasting. The image below shows a vRealize dashboard that includes the Tintri Management Pack.

VCopsTintri

The Tintri Management Pack will initially be available for vCOPS 5.8. Expect support for vRealize 6.0 in the coming months.

Words from a Tintri Customer in Australia

I recently caught up with Deakin University, one of our first customers in Australia to deploy Tintri.

Deakin University is one of Australia’s new generation of universities. Deakin University was established in 1974. It has four modern campuses located in the vicinity of Melbourne and the Great Ocean Road: one in metropolitan Melbourne, two in the bayside town of Geelong, and one in the picturesque regional seaside centre of Warrnambool. These campuses provide students and researchers with access to the latest industry-standard facilities such as Deakin’s Motion.Lab at the Melbourne Campus and the Geelong Technology Precinct at the Geelong Campus, Waurn Ponds. Deakin University has a strong international focus: around 7,900 (approximately 19%) of its 42,000 student body are from over 60 countries worldwide.

Deakin have two T540s located at the Geelong and Melbourne campuses (~100 km apart) and use ReplicateVM between these sites.

In my interview with Robert Ruge, Systems & Network Manager, School of IT, he had the following to say:

What made you choose Tintri vs your current storage?

I have been running my current storage for a few years now and every so often we would have an incident that would take our storage offline for just a bit too long, which forced the VMs to go offline. I think that we have now remedied this problem, but at the same time the opportunity arose to look at our current storage solutions and to either extend the current solution or look at a more suitable product. I believe that a little storage diversity is good for operations, so that if one product is giving problems you can move your VMs around. I looked at iSCSI solutions but felt that I didn’t want to complicate our environment with another access method as I was quite happy with the current NFS access method. Tintri then came on the scene with a product that filled my requirements of being NFS based, SSD accelerated, stable, expandable and future looking.

How many VM’s are you currently running and what type of workloads?

We are currently running just over 70 VM’s on one VMstore with a second VMstore as a replication partner for DR. As I become more comfortable with the capacity for the VMstore to handle the workload I will move more workload onto it.
The VM workload is varied. Some of them are infrastructure and management VM’s, some are low usage SQL server database VM’s, but the majority are HPC VM’s for our research staff to use for their computational requirements. These are a mixture of Linux and Windows server.

Has the VMstore made a difference to how you manage your virtual environment?

Yes it has made a difference to how I manage my virtual environment. I now have to do less management of the storage as it really is a simple appliance to setup and manage, plus the replication is a no brainer to setup and leave running. I had to be much more hands on with the replication on my existing storage.

What are the main features of the VMstore that you most use/like?

Simplicity of management, economy of storage, replication and the dashboard. What’s not to like?

Other comments/feedback (positive and negative)?

Keep up the good work and keep expanding the range of solutions to which the appliance can be deployed.

VMworld 2014

Tintri today announced it will be a Platinum Sponsor at VMworld 2014 taking place August 24-28 at Moscone Center in San Francisco. The company will showcase new products with deeper integration with VMware products in booth 921. In addition, Tintri will present a number of breakout sessions with customers and participate in industry panels throughout the conference.

Highlights:

BCO 1811 – Top 10 Do’s/Don’ts of Data Protection using VADP
Dominic Cheah, Technical Marketing Engineer, will present lessons learned about the various backup transport modes VADP provides for data protection, what pitfalls to avoid, as well as recommended best practices with HotAdd, NBD and NBDSSL.

EUC 2352 – Horizon View in a K-12 School District
Tintri customer Chad Marlow and Tom Alexander from Enumclaw School District will discuss the process they used to justify, plan, and deploy VMware Horizon View and Tintri.

SPO2409 – SPO – Storage in the Cloud: Taxonomy and Trends
Rex Walters, VP of Technology will discuss the various approaches to persistent storage in virtualized environments, whether using traditional on-premises virtual infrastructure, or emerging off-premises public and private clouds.

STO3029 – Real World Private Cloud: Tips and Tricks from a Fortune 1000 Enterprise Cloud Project
A Tintri customer will join Chuck Dubuque, Sr. Director of Product and Solution Marketing, to discuss:

  • How they defined their private cloud in terms of functionality, capabilities, and SLA
  • Hidden requirements and gotchas they encountered
  • Capabilities before and after private cloud

STO3034 – Taking Application-Aware Storage to the Next Level with PowerCLI and PowerShell
Justin Lauer, Principal Technology Evangelist, VCP4 and VCP5, a 6-time VMware vExpert and a former VMUG leader, will team up with John Phillips, a member of the technical staff, to address how to leverage the task-based automation and configuration management of PowerShell to dramatically simplify your storage operations.

Converged Infrastructures – Curing the Symptoms Follow Up

In follow-up to my colleague’s recent blog (link here), I thought I would add a few words on the subject.

In a Converged Infrastructure stack, storage is one of the main areas subject to change, and it has the greatest impact on the ability to run workloads.

As mentioned in the previous blog post, CIs were developed to help simplify a complex solution stack. How so? Once it’s built you still need people who understand all the layers and are experts in compute, network and storage. Let me tell you from experience: managing a NetApp platform running Clustered ONTAP is not a simple thing (I don’t mean to single NetApp out here, but this was my background for many years).

Having a Reference Architecture is great, but it is only the beginning. How do you adapt your infrastructure as workloads and business requirements change?

If you are a service provider or offer a Private Cloud service to the business, how does your CI need to change to take on additional workloads or when existing workloads change?

This results in having to re-design compute and storage. Compute is simpler, as you can just add more blades or servers; storage is not so easy, as it has been specifically designed around known quantities (IOPS, throughput etc.), which translate into spindles, RAID groups, volumes, LUNs, datastores and so on. Take a look at a typical storage layout for a VDI solution in the diagram below.

VDI layout

As an administrator how do you know what storage resource is available to run additional VM workloads and how do you get this information?

If a workload is seeing poor performance, how do you go about troubleshooting and isolating the issue?

If I need to add more workload, can I do so without impacting other VMs?

How many different UIs and tools would you need from NetApp and EMC to answer the above questions?

So, how do we make a Converged Infrastructure really live up to the expectations it sells? Make it simple. By this I don’t just mean having a blueprint document to follow that builds something. It needs to be truly simple to deploy and manage.

Make the stack easy for a general IT administrator to manage without needing any in-depth storage skills.

This is where Tintri shines. It eliminates the need to follow any special best practices. With all the advanced functionality comes self-learning technology that implements and allocates resources based on the requirements, so one doesn’t have to do the pre-work in terms of design.

All you have to figure out is the capacity. Once done, the storage can almost be ignored. After deployment it continues to provide customers with deep analytics across the infrastructure, helping make the right decisions faster.

Unlike a reference architecture, it is dynamic and adjusts to the needs of the applications and VMs, maintaining the benefits across the lifecycle of the deployment. It is self-tuning and has a full set of REST APIs in case customers want to bring in some automation. The automation that can be achieved using the Tintri REST APIs is unmatched given the kind of analytics that are available at the customer’s disposal.
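
To give a feel for the kind of automation that is possible, here is a minimal Python sketch that logs in to a VMstore and lists per-VM statistics over REST. The /api/v310 paths and field names follow the pattern of Tintri’s published REST API, but treat the exact endpoints, response fields and credentials object as assumptions to verify against the API documentation for your release.

```python
# Hedged sketch: authenticate to a VMstore and print per-VM stats over REST.
# Endpoint paths, the credentials typeId and stat field names are assumptions
# to be checked against the Tintri REST API documentation.
import requests

VMSTORE = "https://vmstore01.example.com"   # placeholder VMstore address
s = requests.Session()
s.verify = False                            # lab only; verify certificates in production

login = {
    "username": "admin",
    "password": "password",
    "typeId": "com.tintri.api.rest.vcommon.dto.rbac.RestApiCredentials",
}
s.post(f"{VMSTORE}/api/v310/session/login", json=login).raise_for_status()

vms = s.get(f"{VMSTORE}/api/v310/vm").json()
for vm in vms.get("items", []):
    name = vm.get("vmware", {}).get("name")
    latest = (vm.get("stat", {}).get("sortedStats") or [{}])[0]
    print(name, latest.get("latencyTotalMs"), latest.get("operationsTotalIops"))
```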

So, I would conclude by stating that Reference Architectures and Converged Infrastructures are great, but what’s important is the constituents of the Converged Infrastructure and how they simplify not only the initial deployment but ongoing operations as well.

Tintri Datastores – A Customer’s Perspective

Great write-up from a Tintri customer…

Neeshan Peters

After having some time to spend with Tintri datastores I can honestly say it is the most administratively liberating storage that I have ever managed, and I use the term managed loosely. First, the installation of the storage array was quite easy. Take it out of the box, plug in your power, management cables, data cables and turn it on. That’s it for the physical part. For the logical side, you have to give it an IP address and connect it to vCenter. Yes that’s it. No LUNs to carve, no extents, no special setups. Not many storage providers can come close to that kind of setup ease.

The hardware is also surprisingly simple. The magic in these storage arrays come from an intimate knowledge of VMware and what the consumer needs from it. That is mostly encompassed in the firmware, logic and higher functions of the array that you…


Tintri VM-awareness extended to vCenter UI

Tintri already provides one of the easiest and most user-friendly management interfaces I’ve ever seen on a storage platform. Today sees the announcement of their new management plugin for vCenter, which gives virtual administrators access and visibility into their VMstore environment in the familiar vSphere Web Client format. The plugin is free to all Tintri customers and will be available for download on March 24th.

Tintri now offers management capabilities at the following levels:

  • VMStore UI – Individual node management console
  • vSphere Web UI – Multiple VMstore/Datastore management from Virtual Center
  • Tintri Global Center – Manage up to 32 VMstores from a single UI

Here’s a rundown on the key features of the new plugin:

Centralised VMstore Monitoring

  • VMstore level dashboard
  • Alert monitoring & management
  • Performance & Space resource gauges and changers
  • Real-time and historic monitoring

Easy Configuration

  • Install & setup in a few minutes
  • Scales incrementally by adding more VMstores
  • Centrally updates VMstore configuration and Hypervisor optimisation settings

Per-VM Data Management

  • Space efficient snapshots and clones
  • WAN efficient replication
  • Instantly diagnose VM performance issues with end-to-end latency breakdown

I’ll walk you through some of these capabilities:

Below is a screenshot of the Datastore overview panel. From here you get a summary view of the performance and capacity data, plus a list of the main VMs that have changed their resource consumption.

Tintri VCP VMStore Overview

Each panel in the Datastore overview window can be expanded. Below is a view of the Datastore latency; you can see the end-to-end breakdown by clicking on any point on the chart. You can also see a list of the top VM contributors on the right.

Tintri VCP TopLatencyConributors

You can look at each VM’s latency in more detail.

Tintri VCP VMsummary latency

Adding additional VMstores is very simple, and applying optimisation/best practice settings can be done by right-clicking each Datastore.

Each Datastore will be added to all ESX hosts.

Tintri VCP Add VMstore To All Hosts

Then apply the best practice settings.

Tintri VCP ApplyBestPractices

Once applied, you can then view these settings:

Tintri VCP ApplyBestPractices detail

By right-clicking on a VM you can see the Protection options. From here you can take a Snapshot, clone or replicate.

Tintri VCP Protect Replication Snaps

Plus you can set the Snapshot & Replication schedules:

Tintri VCP ScheduleSnaps

To see the product in action, there is a great demo on the Tintri website. Click here.

Tintri VM Granular Level Capabilities

I’m now into my 2nd week at Tintri and have been blown away by the capabilities of the VMstore. As mentioned in my previous blog post, everything Tintri does is at the VM object level (VM and vDisk).

So, what can Tintri do at a VM/vDisk level?

Firstly, you get VM-granular snapshots. Other storage vendors may lay claim to this; however, behind the smoke and mirrors their snapshots still happen at a volume or LUN level. What you end up with is a snapshot that contains all your VMs on a given datastore, irrespective of the fact that you only wanted to snap one.

Secondly, you get VM granular replication. This is a big deal.

Thirdly, VM granular Clones.

Other areas include VM level QoS & analytics/reporting. I briefly covered these off in my last post, but I would like to go into each of these in more detail soon.

Tintri SnapVM

Snapshots preserve the state and data of a VM at a specific point in time, allowing VMs to be easily rolled back or replicated. However, as I’ve already mentioned, traditional storage architectures provide snapshots of storage objects, such as LUNs and volumes, rather than VMs. These snapshots can lead to inefficient storage utilisation as tens to hundreds of VMs with varying change rates are snapshotted at once. Snapshot schedules can only be set at a LUN or volume level, leading to such practices as creating one LUN per VM as a workaround to achieve individualised per-VM snapshot schedules.

Tintri VMstore delivers unique, space-efficient snapshot capability that consumes virtually zero disk space and can restore VMs within minutes or even seconds. In addition, granular VM snapshots allow administrators to create snapshots of individual VMs and quickly recover data or entire VMs from snapshots. Tintri VMstore supports 128 snapshots per VM for longer-term retention. Data protection management is also simplified with the use of default or custom schedules for VM-consistent or crash-consistent snapshots that protect individual VMs automatically without administrator intervention.

Crash-consistent snapshots do not take extra measures with the hypervisor or guest VM to coordinate the snapshot. Thanks to integration with native hypervisor management tools such as VMware vCenter, Tintri provides VM-consistent snapshots for simpler application recovery. With VM-consistent snapshots, hypervisor management APIs are invoked to quiesce the application in a VM before the snapshot is taken. Unlike storage-centric snapshot technologies in traditional shared storage systems, Tintri SnapVM makes recovery workflows remarkably easy. Files from individual VMs can be recovered without additional management overhead, dramatically reducing the time to recovery.
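
For context on what invoking the hypervisor APIs to quiesce a VM looks like in practice, here is a minimal pyVmomi sketch that requests a quiesced (VM-consistent) snapshot through vCenter. This illustrates the vSphere side of the mechanism, not Tintri’s internal implementation; the hostname, credentials and VM name are placeholders.

```python
# Minimal sketch: ask vCenter for a quiesced snapshot so the guest flushes its
# buffers and the snapshot is VM/application-consistent, not just crash-consistent.
# Connection details and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab only; use verified certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-prod-01")   # placeholder VM name

# quiesce=True asks VMware Tools to quiesce the guest file system/applications.
task = vm.CreateSnapshot_Task(name="pre-change", description="VM-consistent snapshot",
                              memory=False, quiesce=True)

Disconnect(si)
```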

Scheduling and creating snapshots is very easy. Simply right-click on the VM and choose the Protect option. From here it takes you to the following UI:

Per VM Snapshot

In the screenshot above, Tintri ReplicateVM is configured so the VM snapshot can be replicated to a secondary VMstore. We can also specify a different retention period for the snapshot at the remote site.

Tintri ReplicateVM

Tintri ReplicateVM capability supports efficient replication of VMs from a primary to a second VMstore. Tintri ReplicateVM is based on Tintri snapshot technology, allowing either a new or existing snapshot to be replicated automatically.
Like SnapVM, ReplicateVM enables administrators to apply protection policies to individual VMs, rather than to units of storage such as volumes or LUNs. It allows administrators to easily establish, as needed, a snapshot and replication policy for an individual VM or a set of VMs.

ReplicateVM works by replicating deduplicated and compressed snapshots of VMs from one Tintri VMstore to another, sending only changed or missing blocks across the network. As a result, VM replication is highly WAN-efficient, with up to 95 percent reduction in bandwidth utilisation. It also enables remote cloning, making distribution of golden images for workloads such as VDI with multisite high availability (HA) efficient and simple. ReplicateVM supports different topologies including one-to-one, many-to-one and bi-directional replication.

Replication can be dedicated to specific network interfaces, and optionally throttled to limit the rate of replication when replicating snapshots between Tintri VMstore appliances located in datacenters connected over wide-area networks (WANs).

Tintri CloneVM

A few traditional storage systems can provide cloning capabilities that share data blocks between the source/parent and the clone. Unfortunately these clones are done at the LUN or volume level, which can vastly complicate VM deployment, cloning and management operations. Tintri CloneVM™ enables space-efficient cloning operations at the individual VM level. This eliminates the limitations of traditional storage architectures that necessitate complex provisioning and management.

VMstore builds on snapshots to support individual VM cloning capability, either by taking a new snapshot or by cloning an existing snapshot. Hundreds of clones can be created virtually in an instant, all of which are space efficient and full-performance. Cloned VMs can be quickly accessed, powered on and put into service, enabling more efficient use cases such as virtual desktop infrastructure (VDI), development and test, business intelligence and database testing.

New VMs created via cloning exist and function independently from the parent VMs from which they are created. Behind the scenes, new VMs share common vdisk references with parent VM snapshots to maximise space and performance efficiencies. The extent to which they individually grow and diverge from the data they share with their respective parents defines their incremental storage space requirements. Tintri’s patented use of flash assures that clones are 100 percent performance-efficient. They get the same level of performance as any other VM stored on a Tintri VMstore system.

Using the Tintri UI, hundreds of clone VMs can be created at a time. The cloned VMs are dynamically registered and visible to the hypervisor for immediate use. Administrators can also select customisable specifications defined in vCenter for preparing newly created clone VMs using the vCenter sysprep functionality. Further, clones can also be created from golden image VMs for use cases such as test and development and VDI.

Starting a new life at Tintri – The VM aware storage company



After nearly 8 years at NetApp, I decided to leave and take up a pre-sales position with Tintri Inc. in Australia/New Zealand, starting on the 17th of February. (For those who don’t know Tintri, I will cover this later in the post.)

During my time at NetApp I made some lifelong friends and had the opportunity to work with some of the best people in the industry, but for me it was time to move on and seek out new challenges.

I am very excited about the opportunity and about joining at a time when Tintri is just starting out in ANZ. Lots to do in the first few weeks, including building a demo environment once my shiny new Tintri T620 arrives.

So, who are Tintri? If you’re a vExpert you will have probably received a new Polo from them every year.

Tintri started out in 2008 in Mountain View, CA with the vision of creating a storage platform that was able to provide a better synergy between virtualisation and storage.

Using the latest advancements in flash, processing and networking, they came up with the ‘VMstore’, which is still the industry’s only VM-aware storage appliance for VMware (with KVM and Hyper-V support to follow soon).

Being both a virtualisation and storage guy this is a very attractive proposition to me.

Having come from NetApp and worked with other storage vendors in the past (including EMC), I know one of the main challenges is designing, configuring and tuning traditional storage arrays for virtual environments. If you think about it, these arrays were designed for a very different world where workloads were hosted on physical servers. Now, I’m not saying they don’t work; they just need a lot more management, tuning and tweaking to handle the ever-increasing virtual workloads.

Pretty much all storage arrays see the world in terms of volumes and LUNs and have no visibility or awareness of the workloads being placed on them; it’s just IO. This is where the disconnect is. How are you meant to get the best performance and capacity utilisation if you have no real visibility of what the VMs and applications are doing?

Tintri 1

Storage should understand and operate at the VM level.

As a storage guy I have spent many hours thinking about how to lay out my datastores (LUNs or volumes) on the available storage. NetApp makes it a little easier as you don’t need to think about RAID groups, but it’s still complex regardless.

The bottom line is that virtualisation has changed the way data is managed.

The Tintri VMstore is a VM-aware storage architecture. This means it understands and operates on VMs and virtual disks — instead of conventional storage objects such as volumes and LUNs. All storage operations such as snapshots, clones and replication are done at the VM level, which eliminates the need to deal with the underlying complexity of traditional storage.

Where has all the storage complexity gone? Simple. The VMstore is presented to the hypervisor as a single large NFS datastore (currently up to 33.5TB usable per unit). If you need more, just add another VMstore and away you go. No more LUNs, RAID configuration and so on.
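
As a rough sketch of how little provisioning is left once the array is just an NFS export, here is how a datastore can be mounted on every host through the vCenter API with pyVmomi. The VMstore address, export path and datastore name below are placeholders, not values published by Tintri.

```python
# Minimal sketch: mount an NFS export (e.g. a VMstore) as a datastore on every host.
# Hostnames, the export path and the datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab only; use verified certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

spec = vim.host.NasVolume.Specification(
    remoteHost="vmstore01.example.com",     # data IP/hostname of the NFS array (placeholder)
    remotePath="/tintri/vmstore01",         # NFS export path (placeholder)
    localPath="vmstore01",                  # datastore name as seen in vSphere
    accessMode="readWrite")

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```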

The functionality underlying Tintri VMstore™ can be categorised in three sections:

1. Storage intelligence: Delivering performance, density and scalability without the complexity.

2. Infrastructure insight: Delivering a complete picture of virtualised workloads.

3. VM control: Delivering VM-granular data management and automation.

Storage Intelligence

Tintri VMstore’s approach automatically ensures every VM gets the performance it needs. Expanding storage is simple, as each VMstore appliance appears as an additional, high-capacity datastore in VMware vCenter. This makes it easy to scale and manage each node as part of a VMware Storage DRS cluster and eliminates any risk of downtime.

Let’s see how:

Tintri FlashFirst™ design: VMstore is a hybrid storage solution. It uses a combination of flash-based solid-state devices (SSDs) and high-capacity disk drives for storage. Tintri’s patented FlashFirst design incorporates algorithms for inline deduplication, compression and working set analysis to service more than 99 percent of all IO from flash, for very high levels of throughput and consistent sub-millisecond latencies for both read and write operations.

Tintri 3

Flash-first design minimises swap between SSD and HDD by leveraging data reduction in the form of deduplication and compression to increase the amount of data that can be stored on flash. This is complemented by detailed profiling of all the active VM IO to ensure metadata and active data are kept in high performance flash. Only cold data is evicted to disk, which does not impact application performance. It takes advantage of the fact that each VM has an active working set, which is a fraction of the overall VM. Using a flash-only approach means all data must be stored on high performance (and expensive) flash, whether it needs to be there or not.

Unlike flash-only products, 100 percent of the operational flash capacity on a Tintri VMstore can be used without concern about running out of space and having applications come to a screeching halt. In addition, the Tintri VMstore is operationally far simpler and more cost-effective than flash-only products.

Traditional storage systems often add flash to an existing disk-based architecture, using it as a cache or bolt-on tier, while continuing to use disk IO as part of the basic data path. In comparison, VMstore services 99 percent of IO requests directly from flash, thereby achieving dramatically lower flash-level latencies, while delivering the cost advantages of disk storage.

Tintri’s innovative FlashFirst design addresses MLC flash problems that previously made it unsuitable for enterprise environments: Flash suffers from high levels of write amplification due to the asymmetry between the size of blocks being written and the size of erasure blocks for flash. Unchecked, this reduces random write throughput by more than 100 times, introduces latency spikes and dramatically reduces flash lifetime. FlashFirst design uses a variety of techniques including deduplication, compression, analysis of IO, wear leveling, garbage collection algorithms and SMART (Self-Monitoring, Analysis and Reporting Technology) monitoring of flash devices and dual parity RAID-6 to handle write amplification, ensure longevity and safeguard against failures.
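
To illustrate the block-size asymmetry behind write amplification, here is a toy calculation; the page and erase-block sizes below are typical published NAND figures, used only as assumptions, not Tintri specifics.

```python
# Toy illustration of write amplification: in the worst case a small host write
# forces a whole erase block to be relocated. Sizes are typical NAND figures,
# used here only as assumptions for the arithmetic.
page_size_kb = 4          # typical NAND page size
erase_block_kb = 512      # typical NAND erase block (128 pages)

worst_case_amplification = erase_block_kb / page_size_kb
print(f"Worst-case write amplification: {worst_case_amplification:.0f}x")   # 128x
```

Techniques such as deduplication, compression and careful garbage collection exist precisely to keep the real-world figure far below that worst case.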

VM QoS: Tintri VMstore is designed to support a mixed workload of hundreds of VMs, each with a unique IO configuration. Additionally, as volumes of traffic ebb and flow, the FlashFirst design analyses and tracks the IO for each VM, delivering consistent performance where it is needed. This enables it to isolate VMs and to queue and allocate critical system resources such as networking, flash/SSDs and system processing to individual VMs. The QoS capability is complementary to VMware’s performance management capability.

The result is consistent performance where it is needed. And all of the VM QoS functionality is transparent, so there is no need to manually tune the array or perform any hands-on administration.

QoS is critical when storage must support high-performance databases generating plenty of IO alongside latency-sensitive virtual desktops. This is commonly referred to as the noisy neighbour problem, and it affects both traditional storage architectures and flash-only arrays that lack VM-granular QoS. Tintri VMstore ensures database IO does not starve the virtual desktops, making it possible to have thousands of VMs served from the same storage system.

Scaling storage: Scaling storage beyond the performance and capacity of a single system is as simple as adding another VMstore system – a task that takes less than two minutes. This building-block approach effectively adds another datastore that can be managed by the virtualisation layer. To tackle the challenge of managing individual VMstore systems, Tintri has created Tintri Global Center. Built on a solid architectural foundation capable of supporting over one million VMs, Tintri Global Center is an intuitive centralised control platform that lets administrators monitor and administer multiple VMstore systems as one.

Infrastructure Insight

When I logged into the management UI for the first time, I have to say I was very surprised (in a good way). It’s very clean and uncluttered; a great deal of thought has gone into its design and layout.

Essentially the Dashboard is designed to help you draw quick conclusions about your VMstore’s health, identify problems, and help you make informed resource management decisions. Dashboard numbers are refreshed every 10 seconds, and performance reserves and space changers are refreshed every 10 minutes.

Space and performance reserve changers are the VMs that have experienced the largest change in reserves and space within the last week.

You can drill down on the VM, examine its historic or real time trends, and correlate the time of any I/O spikes to suspicious activities on the business application.

In addition, when the datastore is unexpectedly running low on space, refer to the space changers list to find potential culprits.

Tintri dashboard 2011 09 20

As I have mentioned before, traditional storage systems provide a performance view from the LUN, volume or file-system standpoint, but cannot isolate VM performance or provide insight into VM-level performance characteristics.
It is difficult for administrators to understand situations such as the impact of a new VM workload, without access to relevant VM performance metrics. In addition, identifying the cause of performance bottlenecks is a time consuming, frustrating and sometimes inconclusive process that requires iteratively gathering data, analysing the data to form a hypothesis and then testing the hypothesis. In large enterprises, this process often involves coordination between several people and departments, typically spanning many days or even weeks.

Tintri provides a complete, comprehensive view of VMs including end-to-end tracking and visualisation of performance across the entire data center infrastructure. This ensures that administrators can procure the critical statistics they need. The goal is ensuring storage performance stays at acceptable levels with minimal latency.

Tintri VMstore monitors every IO request at the vdisk and VM level and can determine whether latency is being incurred at the hypervisor, network or storage level. For each VM and vdisk stored on the system, enterprise IT teams can use VMstore to instantly visualise where potential performance issues may exist, whether on the host, the network or the storage. The latency statistics are displayed in an intuitive format. In an instant, administrators can see the bottleneck rather than trying to deduce where it is from indirect measurements and time-consuming detective work.

The hypervisor latencies are obtained using vCenter APIs, while the network, file system and disk latencies are provided by Tintri VMstore, which knows the identity of the corresponding VM for each IO request.
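
For reference, the hypervisor-side numbers in that breakdown are the same kind of counters any tool can pull from vCenter’s PerformanceManager. Below is a minimal pyVmomi sketch that reads per-virtual-disk read latency for one VM; the counter name and real-time interval are standard vSphere values, while the connection details and VM name are placeholders.

```python
# Minimal sketch: query vCenter's PerformanceManager for per-vdisk read latency.
# Connection details and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab only; use verified certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-prod-01")   # placeholder VM name

# Map "group.name.rollup" counter names to their IDs, then query recent real-time samples.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}
metric = vim.PerformanceManager.MetricId(
    counterId=counters["virtualDisk.totalReadLatency.average"], instance="*")
query = vim.PerformanceManager.QuerySpec(entity=vm, metricId=[metric],
                                         intervalId=20, maxSample=15)

for series in perf.QueryPerf(querySpec=[query]):
    for val in series.value:
        print(val.id.instance, val.value)    # read latency samples (ms) per virtual disk

Disconnect(si)
```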

Administrators can detect trends with this data from the VMstore and individual VMs, all without the added complexity of installing and maintaining separate software. This built-in insight can reduce costs and simplify planning activities — especially around virtualising IO-intensive critical applications and end-user desktops.

Tintri 4

VM Control

Tintri VMstore allows all data management operations — snapshots, clones and replication — to operate at the VM level. This enables managing large-scale virtual environments to the vdisk level with complete, granular control. I will cover more about each of these areas in a further blog post soon.

In addition, all VAAI capabilities for NFS are fully supported, and a management console for the vSphere Web Client will be available very soon.