Open-E DSS V7 dramatically improves the performance and feature set of the most affordable, robust data storage software for customers' high availability, cloud storage, virtualization, and business continuity needs.

Open-E today announced the latest version of its Open-E Data Storage Software (DSS) V7, which provides up to a 10X improvement in random-write performance over existing volume replication implementations, superior performance with Hyper-V clusters and multi-core CPUs, and expanded support for networking cards and VMware tools. These new capabilities are available immediately at no additional cost to existing customers.

"With over 27,000 installations across more than half the countries in the world, Open-E has a tremendous advantage in customer experience that enables our engineers to tune the Linux-based software to take advantage of the latest features in multi-core processors, virtualization tuning for iSCSI targets, and network cards," said Krzysztof Franek, CEO and president of Open-E. "This engineering insight now implemented in Open-E DSS V7 results in a myriad of newly improved features and performance milestones like advanced system tuning parameters for hardware performance optimization, improved overall performance with multi-core processors and superior performance of up to 10X in random writes of mirrored volumes."

The 10X performance improvement applies to the Open-E DSS V7 Synchronous Volume Replication with Failover feature, a fault-tolerance process that uses iSCSI volume replication to create mirrored target data volumes. Data is copied in real time, and every change is immediately mirrored from the primary server to the secondary storage server. In case of a failure, scheduled maintenance of the primary server, or loss of the primary data source, failover automatically switches operations to the secondary storage server so that processes can continue without interruption. The performance improvement is a significant achievement, considering that random writes to a volume can result in a large amount of transferred data even when the actual data changes are small.
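
To make the mechanism concrete, the minimal Python sketch below illustrates the general idea of synchronous mirroring with failover: a write is applied to both copies before it is acknowledged, and I/O is redirected to the surviving copy if the active server fails. It is an illustration of the concept only, not Open-E's implementation, and the class and method names are hypothetical.

# Minimal sketch of synchronous mirroring with failover (illustrative only;
# not Open-E's implementation).

class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}      # block_number -> data
        self.online = True

    def write(self, block, data):
        if not self.online:
            raise IOError(f"{self.name} is offline")
        self.blocks[block] = data

class MirroredTarget:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.active = primary          # server currently serving I/O

    def write(self, block, data):
        try:
            # Synchronous replication: every change is applied to both
            # copies before the write is acknowledged to the initiator.
            self.primary.write(block, data)
            self.secondary.write(block, data)
        except IOError:
            # On failure, switch to the surviving copy and retry so I/O
            # continues (in degraded, unmirrored mode).
            self.failover()
            self.active.write(block, data)

    def failover(self):
        # Switch operations to whichever copy is still online.
        self.active = self.primary if self.primary.online else self.secondary

if __name__ == "__main__":
    target = MirroredTarget(Volume("primary"), Volume("secondary"))
    target.write(0, b"hello")
    target.primary.online = False      # simulate a primary failure
    target.write(1, b"world")          # served by the secondary after failover
    print(target.active.name)          # -> "secondary"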

The latest release of Open-E DSS V7 continues to provide significant advantages over competing offerings, including the following enhancements:

  • Advanced system tuning parameters for network, iSCSI, Volume Replication, NFS, and IPoIB to deliver more performance from the storage system
  • Superior stability and performance in Hyper-V Clusters
  • Improved overall performance with multi-core CPUs, optimizing application operation
  • Expanded VMware tools for ESXi 5.x (plug-in)
  • Added support for Adaptec Series 8 (maxCache Plus caching technology)
  • Added tuning tools and support for 40Gbps Mellanox Ethernet cards (mlx4_en, v.2.1) and 40Gbps Mellanox InfiniBand cards (mlx4_ib, v.1.0)
  • Expanded capability to assign snapshots to logical volumes via API
  • Updates for support of kernel 3.4.69, DRBD version 8.4.3, and irqbalance version 1.0.7
  • Updated major RAID and NIC drivers and their adapter configuration tools

"Open-E is unique in providing robust enterprise level data storage functionality, reliability, and performance for applications in high availability, cloud storage, virtualization and business continuity environments at a fraction of the cost of alternative storage solutions," continued Franek. "Open-E DSS V7 enables organizations of all sizes to continue to know that their business critical data can be protected against system failure, cyber-attack, fire, or natural disaster with one low-cost, easy-to-use, high performance solution."

Massively Scalable I/O Performance, Nanosecond-Class Latency Enable Next Generation of High Performance Data Applications

A3CUBE has announced a groundbreaking 'brain-inspired' data plane encapsulated in a Network Interface Card (NIC), designed to bring supercomputing benefits to the enterprise by dramatically transforming storage networking to eliminate the I/O performance gap between CPU power and data access for supercomputing applications.

The RONNIEE Express data plane profoundly elevates PCI Express from a simple interconnect to a new intelligent network fabric, leveraging the ubiquity and standardization of PCIe while solving its inherent performance bottlenecks. A3CUBE's In-Memory Network technology, for the first time, allows direct shared non-coherent global memory across the entire network, enabling global communication based on shared memory segments and direct load/store operations between the nodes. The result is the lowest possible latency, massive scalability and disruptive performance that is orders of magnitude beyond the capabilities of today's network technologies, including Ethernet, InfiniBand and Fibre Channel.

"Organizations struggle to keep up with the amount of traffic on traditional networks generated from a variety of sources," said Bob Laliberte, senior analyst, ESG. "A3CUBE's In-memory Network fabric leverages an innovative approach to transforming HPC, Big Data and data center environments in order to drive greater performance and efficiencies in the network and storage systems. A3CUBE is extending PCIe capabilities in order to deliver a next generation network that it claims will overcome traditional network bottlenecks utilizing a high performance (Nano-second latency) and massively scalable architecture."

The innovative RONNIEE Express data plane enables exascale storage by combining supercomputing's massively parallel operational concepts with an innovative I/O interface that eliminates central switching, thanks to support for multi-dimensional topologies such as 2D/3D torus and hypercube. This reduces network overhead, slashing the latency of traditional storage networking designs, and introduces military-grade reliability along with carrier-grade data plane features. The unique RONNIEE Express communication mechanism creates a genuine paradigm shift in network communication, providing applications with a transparent, direct memory-to-memory connection. The In-Memory Network discards the protocol stack bottleneck and replaces it with a direct memory-to-memory mapped socket, producing extraordinary and disruptive performance enhancements while leveraging commodity hardware.
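
The short Python sketch below illustrates the shared-memory style of communication described above in the most generic terms; it is not A3CUBE's API. A local temporary file stands in for the memory window that a fabric like RONNIEE Express would expose from a remote node. The point is simply that once such a window is mapped, data moves with ordinary memory reads and writes rather than through send()/recv() calls in a socket-based protocol stack.

# Conceptual sketch of "load/store" communication over a mapped memory window.
# NOT A3CUBE's API; a local temporary file stands in for the memory segment
# that a remote node would expose over the fabric.

import mmap
import os
import tempfile

WINDOW_SIZE = 4096  # size of the mapped window, in bytes (assumed)

# Stand-in for the exported memory segment of a remote node.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, WINDOW_SIZE)

with mmap.mmap(fd, WINDOW_SIZE) as window:
    # "Store": the sender writes directly into the mapped window.
    message = b"block 42 ready"
    window[0:len(message)] = message

    # "Load": the receiver reads the same addresses directly; no packetizing,
    # no socket buffers, no per-message system calls on the data path.
    print(window[0:len(message)].decode())

os.close(fd)
os.unlink(path)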

"Today's data center architectures were never designed to handle the extreme I/O and data access demands of HPC, Hadoop and other Big Data applications," said Emilio Billi, founder and CTO of A3CUBE.  "The scalability and performance limitations inherent in current network designs are too severe to be rescued by incremental enhancements. The only way to accommodate the next generation of high performance data applications is with a radical new design that delivers disruptive performance gains to eradicate the network bottlenecks and unlock true application potential."

A3CUBE's first three products incorporating the RONNIEE Express product line address different data center requirements in building out an In-Memory Network fabric and include:

RONNIEE 2S

RONNIEE 2S is a compact PCIe-based intelligent NIC designed to maximize application performance using a unique combination of hardware and software. RONNIEE 2S eliminates conventional communications bottlenecks and provides multiple channels with sub-microsecond latency and fast direct remote I/O connections with nanosecond-level latency.

RONNIEE RIO

RONNIEE RIO is the first general-purpose NIC supporting Ethernet and memory-to-memory transactions in a 3D torus topology that can plug into any server equipped with a PCIe slot. This powerful data fabric is designed to deliver unmatched performance and presents a scalable interconnection fabric based on a patent-pending shared memory architecture that implements the concept of distributed non-transparent bridging to greatly extend PCIe features and benefits across a next-generation network architecture.

RONNIEE 3

RONNIEE 3 is a revolutionary card designed to extend the scalability of RONNIEE 2S and optimized for high-performance data environments. The In-Memory Network provides full support for memory-to-memory transactions without the usual software overhead, achieving unmatched efficiency and performance compared to ordinary interconnection fabrics available on the market today.

New FC Series appliance validates performance capabilities of the industry’s highest performance storage systems

Load DynamiX has released the industry’s first 16G Fibre Channel performance validation and load generation solution that tests the limits of today’s most complex storage infrastructure and systems. The new Load DynamiX FC Series appliance offers up to eight ports of 16Gb Fibre Channel, fully compatible with 8Gb and 4Gb connections, in a 2RU form factor. The small footprint combined with superior performance makes the Load DynamiX FC Series the most cost-effective performance validation solution in the industry.

The Load DynamiX FC Series helps storage engineers and QA teams accelerate the discovery and resolution of performance problems, test scaling limitations, and validate their Fibre Channel-based storage products for faster time to market. IT organizations and cloud service providers can proactively use Load DynamiX to understand storage infrastructure limitations prior to deployment to eliminate the risk of performance-related outages, avoid overprovisioning and ensure SLA adherence.

By combining Load DynamiX Enterprise advanced workload modeling, extensive protocol support and high port density, Load DynamiX appliances deliver superior accuracy with incredible load generation capabilities. The unique combination provides unparalleled insight into application workload and storage system behavior. The Load DynamiX appliances are easy to deploy, and include pre-built test suites within the graphical user interface to improve testing productivity right out of the box. Multiple simultaneous users can run independent tests on any available test port and easily access the shared results.

“Both storage technology vendors and IT organizations with significant investments in Fibre Channel-based storage will find these new performance validation appliances indispensable,” said Philippe Vincent, president and CEO of Load DynamiX. “We’ve raised the bar in Fibre Channel-based storage infrastructure validation while simultaneously increasing the affordability and usability of our advanced workload modeling and load generation solutions.”

The Load DynamiX FC Series is available to order immediately.

Red Hat Storage scales to petabytes, preserves existing IT infrastructure investments and adapts for seamless data growth

Red Hat Storage helps re-engineer the storage functionality for higher education IT environments – including McMaster University and University of Reading – to manage the growing challenges of big data. A truly open, software-only storage solution based on community-driven innovation, Red Hat Storage helps universities manage petabyte-scale workloads to help produce important research and enable more effective online courses.

McMaster University, Hamilton, Ontario, Canada

Founded in 1887 by Senator William McMaster, McMaster University ranks sixth in the country in university research intensity and is known for developing the learning approach, the “McMaster Model”, that is now adopted by universities worldwide. McMaster University was in the middle of a major ERP software deployment when it needed a new storage system with high availability and data replication capabilities. Having already invested heavily in storage hardware, the university searched for a cost-effective and reliable software-defined storage solution and found it in Red Hat Storage Server. The results have given McMaster University the flexibility it needs to take care of both existing and future storage requirements.

The university’s Red Hat Storage Server deployment can be expanded or upgraded in real time without disrupting operations. One of the primary reasons the university implemented Red Hat Storage Server is that it could continue to add highly available storage to its environment without compromising performance and without incurring downtime.

“I showed Red Hat Storage Server to our operations people, who are ultimately the ones who will have to support it, and they were amazed at how intuitive and easy it was to use. We have a heavily virtualized environment, and we wanted software-defined storage, network, and compute to work within that environment. If you export the raw storage that resides in the Red Hat Storage software-defined layer, then you get the flexibility to export it and replicate it however you require. That was absolutely key for us,” said Wayde Nie, lead architect, University Technology Services, McMaster University.

University of Reading, Reading, United Kingdom (U.K.)

Established in 1892, the University of Reading is a leading force in British and international higher education and is ranked in the top one percent of universities in the world. It is the only U.K. university to offer a full range of undergraduate and postgraduate courses in meteorology. The university’s Department of Meteorology needed a highly reliable, available, and scalable storage file system to manage data for its scientific research projects in weather, climate, and earth observation.

With Red Hat Storage Server, the department saves IT staff valuable time previously spent on maintenance and administration. Two new high-capacity servers have been deployed with Red Hat Storage to enable the system to scale to support 300 terabytes of data. Additionally, the department is gradually adding data from older, stand-alone servers to host more than 1 petabyte of data in total.

“My priorities are to ensure not only that the department can store hundreds of terabytes of research data efficiently and securely, but that good performance is maintained as the I/O load from our growing compute cluster increases. When I started talking to Red Hat, I received the assurances I needed that the challenges I’d experienced with managing scale-out storage could be solved. I also knew I’d get the extra help I needed from Red Hat to fine-tune the product for our academic, data-intensive HPC environment,” commented Dan Bretherton, High Performance Computing manager, Department of Meteorology, University of Reading.

Using multiple nodes provides the same bandwidth and performance from a storage network as far more expensive machines

As computers enter ever more areas of our daily lives, the amount of data they produce has grown enormously.

But for this "big data" to be useful it must first be analyzed, meaning it needs to be stored in such a way that it can be accessed quickly when required.

Previously, any data that needed to be accessed in a hurry would be stored in a computer's main memory, or dynamic random access memory (DRAM) — but the size of the datasets now being produced makes this impossible.

So instead, information tends to be stored on multiple hard disks on a number of machines across an Ethernet network. However, this storage architecture considerably increases the time it takes to access the information, according to Sang-Woo Jun, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

"Storing data over a network is slow because there is a significant additional time delay in managing data access across multiple machines in both software and hardware," Jun says. "And if the data does not fit in DRAM, you have to go to secondary storage — hard disks, possibly connected over a network — which is very slow indeed."

Now Jun, fellow CSAIL graduate student Ming Liu, and Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, have developed a storage system for big-data analytics that can dramatically speed up the time it takes to access information.

The system, which will be presented in February at the International Symposium on Field-Programmable Gate Arrays in Monterey, Calif., is based on a network of flash storage devices.

Flash storage systems perform better than other technologies at tasks that involve finding random pieces of information within a large dataset. They can typically be randomly accessed in microseconds. This compares with the data "seek time" of hard disks, which is typically four to 12 milliseconds when accessing data from unpredictable locations on demand.
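
A rough calculation shows what that latency gap means for random access rates. The Python snippet below assumes roughly 100 microseconds per flash access (the article says only "microseconds") and uses the 4 to 12 millisecond seek times quoted for hard disks; the results are illustrative, serial-access ceilings rather than measured throughput.

# Back-of-the-envelope comparison of the random-access ceiling implied by the
# latencies quoted above (illustrative; real throughput depends on queue
# depth, controller overhead, and transfer size).

flash_latency_s = 100e-6          # assume ~100 microseconds per random access
hdd_seek_low_s  = 4e-3            # 4 ms seek, best case quoted
hdd_seek_high_s = 12e-3           # 12 ms seek, worst case quoted

for name, latency in [("flash", flash_latency_s),
                      ("HDD (4 ms seek)", hdd_seek_low_s),
                      ("HDD (12 ms seek)", hdd_seek_high_s)]:
    ops_per_second = 1.0 / latency   # serial accesses, one at a time
    print(f"{name}: ~{ops_per_second:,.0f} random accesses/second")

# Approximate output:
# flash: ~10,000 random accesses/second
# HDD (4 ms seek): ~250 random accesses/second
# HDD (12 ms seek): ~83 random accesses/second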

Flash systems also are nonvolatile, meaning they do not lose any of the information they hold if the computer is switched off.

In the storage system, known as BlueDBM — or Blue Database Machine — each flash device is connected to a field-programmable gate array (FPGA) chip to create an individual node. The FPGAs are used not only to control the flash device, but are also capable of performing processing operations on the data itself, Jun says.

"This means we can do some processing close to where the data is [being stored], so we don't always have to move all of the data to the machine to work on it," he says.

What's more, FPGA chips can be linked together using a high-performance serial network, which has a very low latency, or time delay, meaning information from any of the nodes can be accessed within a few nanoseconds. "So if we connect all of our machines using this network, it means any node can access data from any other node with very little performance degradation, [and] it will feel as if the remote data were sitting here locally," Jun says.
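
The Python sketch below illustrates the near-data-processing pattern Jun describes, with plain objects standing in for the FPGA-attached flash nodes; it is a conceptual illustration, not the BlueDBM interface. The host sends a predicate to each node, each node filters its data locally, and only the matching records cross the network.

# Sketch of near-data processing: push the filter to the node that holds the
# data instead of shipping every record to the host (not the BlueDBM API).

class StorageNode:
    """Stands in for a flash device plus its FPGA controller."""

    def __init__(self, records):
        self.records = records            # data held locally on this node

    def scan(self, predicate):
        # Filtering happens "close to the data": only matches are returned,
        # which is the traffic-saving step an FPGA could perform in hardware.
        return [r for r in self.records if predicate(r)]

class Host:
    def __init__(self, nodes):
        self.nodes = nodes

    def query(self, predicate):
        # Fan the predicate out to every node and merge the (small) results,
        # rather than pulling every record across the network first.
        results = []
        for node in self.nodes:
            results.extend(node.scan(predicate))
        return results

if __name__ == "__main__":
    nodes = [StorageNode(range(i * 1000, (i + 1) * 1000)) for i in range(4)]
    host = Host(nodes)
    # Only the handful of matching records crosses the "network".
    print(host.query(lambda r: r % 997 == 0))   # -> [0, 997, 1994, 2991, 3988]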

Using multiple nodes allows the team to get the same bandwidth and performance from their storage network as far more expensive machines, he adds.

The team has already built a four-node prototype network. However, this was built using 5-year-old parts, and as a result is quite slow.

So they are now building a much faster 16-node prototype network, in which each node will operate at 3 gigabytes per second. The network will have a capacity of 16 to 32 terabytes.
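
As a rough sanity check on those figures, the short calculation below assumes the per-node bandwidth aggregates linearly across the 16 nodes, which the article does not state; the numbers are illustrative only.

# Quick arithmetic on the planned 16-node prototype, assuming per-node
# bandwidth aggregates linearly (an assumption; only the per-node figure
# and the capacity range come from the article).

nodes = 16
per_node_gb_s = 3                      # GB/s per node, from the article
capacity_tb = (16, 32)                 # planned capacity range, in TB

aggregate_gb_s = nodes * per_node_gb_s
print(f"aggregate bandwidth: ~{aggregate_gb_s} GB/s")

# How long a full sequential sweep of the stored data would take at that rate
# (1 TB taken as 1000 GB for simplicity).
for tb in capacity_tb:
    seconds = tb * 1000 / aggregate_gb_s
    print(f"full scan of {tb} TB: ~{seconds:.0f} s")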

Using the new hardware, Liu is also building a database system designed for use in big-data analytics. The system will use the FPGA chips to perform computation on the data as it is accessed by the host computer, to speed up the process of analyzing the information, Liu says.

"If we're fast enough, if we add the right number of nodes to give us enough bandwidth, we can analyze high-volume scientific data at around 30 frames per second, allowing us to answer user queries at very low latencies, making the system seem real-time," he says. "That would give us an interactive database."

As an example of the type of information the system could be used on, the team has been working with data from a simulation of the universe generated by researchers at the University of Washington. The simulation contains data on all the particles in the universe, across different points in time.

“Scientists need to query this rather enormous dataset to track which particles are interacting with which other particles, but running those kinds of queries is time-consuming,” Jun says. “We hope to provide a real-time interface that scientists can use to look at the information more easily.”
