
The Answer to Hybrid Cloud: The Starter Kit for Microsoft Azure

During the 2014 Microsoft Worldwide Partner Conference last week, Carpathia announced the launch of an exciting new hybrid cloud offering with long-term partner, Equinix: The Starter Kit for Microsoft Azure.

Designed to help businesses and federal agencies overcome the challenges of balancing security and compliance with the cost-saving benefits of the public cloud, Continue reading…


5 Tips for Maintaining Compliance in Healthcare

As part of the healthcare industry, you’re surely aware that it’s critical to protect patients’ information from unauthorized access. That’s exactly why federal regulations, including HIPAA, are in place to guard against the accidental distribution of that information. Healthcare providers and others who work in the industry need to invest time and energy into achieving and maintaining compliance. Continue reading…


Meaningful Use Explained

The term “Meaningful Use” has caused some confusion in the healthcare industry. Meaningful Use refers to the federal government’s standards for Electronic Health Record (EHR) systems. These standards include rules, as well as incentives and penalties for healthcare professionals and hospitals. Check out our infographic explaining the aspects of Meaningful Use. Continue reading…


Nine Criteria for Selecting a HaaS Solution

As the amount of data and information produced and made accessible via technology continues to grow exponentially, it is expected that big data will become as integral to business and society as the Internet has become over the past 25 years.

And as more companies are developing a need for big data analytics, the number of Hadoop service options is increasing. Choosing the right solution is crucial for your company’s success – but as we’ve often heard from customers, cutting through the hype can be a daunting task.

To distinguish fact from fiction, here are nine key criteria we recommend you consider when selecting a Hadoop-as-a-Service (HaaS) solution. If you’re interested in additional information, you can find these points and more in our recent eBook, “Hadoop As A Service: Not all Offerings are Created Equal.” You might also want to check out the GigaOm Research Paper, “Understanding the power of Hadoop as a Service.”

1. The Appropriate Interface for Each Type of User

The core features of a Hadoop solution must meet the needs of both data scientists and Hadoop administrators.

Data scientists typically desire a functionally rich and powerful environment. Can data scientists easily run Hadoop YARN jobs through Hive, Pig, R, Mahout and other data science tools? Are these services immediately available when the data scientist logs into the service to begin work? An “always on” Hadoop service avoids delay in starting and stabilizing a cluster.
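To make this concrete, here is a minimal sketch of the kind of ad hoc access a data scientist might expect from an “always on” service. It assumes the provider exposes a HiveServer2 endpoint and that the PyHive client library is installed; the host, port, username and table name are placeholders rather than any real service.

  # Minimal sketch: ad hoc Hive access against an always-on HaaS endpoint.
  # Assumes HiveServer2 is exposed and PyHive is installed; the host, port,
  # username and table name below are placeholders.
  from pyhive import hive

  conn = hive.connect(host="haas.example.com", port=10000, username="data_scientist")
  cursor = conn.cursor()

  # The query is executed by the service as YARN work behind the scenes.
  cursor.execute(
      "SELECT page, COUNT(*) AS hits FROM web_logs "
      "GROUP BY page ORDER BY hits DESC LIMIT 10"
  )
  for page, hits in cursor.fetchall():
      print(page, hits)

  cursor.close()
  conn.close()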

In contrast to data scientists, for systems administrators, less is more. A streamlined and simple interface for system management tasks allows administrators to perform tasks quickly and with a minimal number of steps. If sets of parameters are exposed in the interface, they likely need configuration. On the other hand, if low-level monitoring details are left to the HaaS provider, the administrative interface can simply report on the overall health and SLA-compliance of the service.

2. Data at Rest is Stored in HDFS, Hadoop’s File System

HDFS features are industry-tested to provide cost-effective, reliable storage at scale. The file system is optimized to work efficiently with MapReduce and YARN-based applications. A distinct advantage over third-party storage systems is that using HDFS avoids the latency and constant cost of translating data from another format to HDFS.

HDFS is compatible with Hadoop’s growing ecosystem of third-party applications. In addition, HDFS features continue to evolve with all the redundancy and data recovery features of competing storage solutions.
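As a simple illustration of working with data at rest in HDFS, the hedged sketch below drives the standard hdfs dfs command-line tool from Python to stage a local file into the cluster and verify it. It assumes a configured Hadoop client is on the PATH; the local file and HDFS paths are placeholders.

  # Sketch: staging data into HDFS with the standard "hdfs dfs" CLI.
  # Assumes a configured Hadoop client on the PATH; paths are placeholders.
  import subprocess

  def hdfs(*args):
      """Run an 'hdfs dfs' subcommand and raise if it fails."""
      subprocess.run(["hdfs", "dfs", *args], check=True)

  hdfs("-mkdir", "-p", "/data/web_logs")                 # create a target directory
  hdfs("-put", "-f", "web_logs.csv", "/data/web_logs/")  # copy local data into HDFS
  hdfs("-ls", "/data/web_logs")                          # verify the upload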

3. Self-Configuring

HaaS solutions should dynamically configure the optimal number and type of nodes and automatically determine tuning parameters based on the type of workload and storage requirements of an application. These optimized environments dramatically reduce human error and administration time, as well as provide results faster than customer-tuned environments.
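To illustrate the idea (not any provider’s actual logic), the sketch below shows the kind of simple sizing heuristic a self-configuring service might apply, deriving a node count from data volume and workload type. The replication factor, per-node capacities and workload multipliers are illustrative assumptions.

  # Hypothetical sizing heuristic for a self-configuring HaaS.
  # Replication factor, per-node capacity and workload multipliers are
  # illustrative assumptions, not any provider's actual tuning logic.
  import math

  def suggest_cluster(data_tb, workload):
      replication = 3              # typical HDFS replication factor
      usable_tb_per_node = 8       # assumed usable disk per worker node
      vcores_per_node = 16         # assumed vcores per worker node

      storage_nodes = math.ceil(data_tb * replication / usable_tb_per_node)
      # Compute-heavy workloads get proportionally more nodes.
      compute_factor = {"batch_etl": 1.0, "interactive_sql": 1.5, "machine_learning": 2.0}
      nodes = max(storage_nodes, math.ceil(storage_nodes * compute_factor[workload]))
      return {"nodes": nodes, "vcores": nodes * vcores_per_node}

  print(suggest_cluster(data_tb=40, workload="interactive_sql"))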

4. True Elasticity

The bursty nature of Hadoop workloads was one of the initial motivations for moving Hadoop to the cloud. Elasticity is a central consideration when evaluating HaaS providers: in particular, the degree to which the service handles changing demands for compute and storage without reconfiguration or manual intervention.

On the storage side, does the storage automatically expand and contract HDFS capacity as new data is added and old data is deleted? Does the HDFS automatically accommodate intermediate outputs of jobs, which can be substantial at times? On the compute side, does the solution automatically support ad hoc analysis by data scientists, which can be unpredictable in its arrival and substantial in its resource requirements?

5. Non-stop Operations

Large, complex, distributed and parallel systems present more challenging operating conditions than one finds in non-parallel applications, including:

  • The need to restart failed sub-processes of a large job to avoid restarting the entire job.
  • Jobs that starve for resources and finish late, or not at all, even when resources are available.
  • Deadlock, which occurs when one process must wait for a resource held by another process while the second process simultaneously waits for a resource held by the first process.

Non-stop Hadoop operations address problems unique to the Hadoop environment. In-house and do-it-yourself (DIY) environments are especially prone to problems with maintaining non-stop operations, since they require deep Hadoop expertise and tooling.

6. Ecosystem Component Availability and Version Tracking

As big data adoption accelerates throughout the industry, so does growth in the tools found within the Hadoop ecosystem. In-memory analysis engines, low-latency SQL for Hadoop, machine learning libraries built on MapReduce, and the increasing numbers of scheduling, workflow and data governance tools are examples. Environments that are up to date with the latest Hadoop software releases allow customers to take advantage of the latest developments.

7. Cost Transparency

The problem of anticipating and managing cost is more pronounced in complex and dynamic big data environments, where compute and storage costs are difficult to understand and multiple variables, such as the types of instances and sizes of virtual machines, must be considered.

When pricing models directly tie to Hadoop units of work and capacity, specifically YARN job units and HDFS storage, costs are quickly understood and controlled.
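As a back-of-envelope example of this kind of transparency, the sketch below prices a month of usage from YARN job units and HDFS capacity. The unit rates are invented placeholders for illustration, not any provider’s actual price list.

  # Back-of-envelope cost model tied to Hadoop units of work and capacity.
  # The rates below are invented placeholders, not a real price list.
  YARN_UNIT_RATE = 0.05      # assumed $ per YARN job unit (e.g. vcore-hour)
  HDFS_TB_MONTH_RATE = 40.0  # assumed $ per TB of HDFS capacity per month

  def monthly_cost(yarn_job_units, hdfs_tb):
      compute = yarn_job_units * YARN_UNIT_RATE
      storage = hdfs_tb * HDFS_TB_MONTH_RATE
      return {"compute": compute, "storage": storage, "total": compute + storage}

  # Example: 20,000 vcore-hours of YARN work plus 25 TB of HDFS data.
  print(monthly_cost(yarn_job_units=20_000, hdfs_tb=25))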

8. Holistic Job Tuning

Experienced big data practitioners continuously monitor jobs to collect information on how to tune workloads. The run time of complex jobs can be cut by as much as 50% through this iterative design-run-monitor cycle, which is especially important for jobs that run repeatedly.

Job tuning in Hadoop environments involves the entire service stack, as shown below. Problems at any layer of the stack lead to inefficiencies in how jobs run. For example, the application logic may use one of the elements in the application framework ineffectively or the root cause of a performance problem may be due to a misconfiguration in the Hadoop/YARN layer.

State-of-the-art monitoring developed with this holistic job tuning in mind can lead to increased efficiency.

 

[Figure: Hadoop data pipeline and service stack]
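To show what the design-run-monitor cycle can look like at the Hadoop/YARN layer, here is a minimal sketch that re-runs a job while varying one commonly tuned parameter and keeps the fastest configuration. The mapreduce.* property names are real Hadoop settings, but run_job() is a hypothetical stand-in for submitting the job and reading its elapsed time back from a monitoring service.

  # Sketch of an iterative design-run-monitor tuning loop.
  # run_job() is a hypothetical stub for job submission plus metrics lookup.
  import random

  def run_job(conf):
      """Hypothetical stub: submit the job with `conf`, return run time in minutes."""
      return random.uniform(30, 60)   # placeholder for a real submit + monitor call

  conf = {
      "mapreduce.map.memory.mb": 2048,     # container memory per map task
      "mapreduce.reduce.memory.mb": 4096,  # container memory per reduce task
      "mapreduce.job.reduces": 20,         # number of reduce tasks
  }

  best_time = run_job(conf)
  for reduces in (40, 80, 160):            # monitor, adjust one knob, re-run
      trial = dict(conf, **{"mapreduce.job.reduces": reduces})
      elapsed = run_job(trial)
      if elapsed < best_time:
          best_time, conf = elapsed, trial

  print("best configuration:", conf, "run time (min):", round(best_time, 1))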

9. Team Experience

A final criterion to consider when evaluating HaaS providers is the experience of the team behind the service. Building any large scale Hadoop environment is a complex and difficult task. Maintaining such an environment in the face of widely varying workloads across multiple customers adds to the difficulty. The experience and background of the support staff can be an important aspect of an effective service.

Before you set out to choose a Hadoop as a Service offering, define a clear set of the necessary qualities and capabilities you are looking for. Taking these nine criteria into consideration can help you select the best-suited provider for your organization.


Benefits of the Private Cloud

Selecting a cloud solution for your organization can be overwhelming, as there are quite a few options to choose from – private, public, hybrid and community. Which is best for you?

As many organizations contemplate migrating to the cloud, their biggest concerns tend to be control, security and location of their data. Are you in the healthcare industry, tasked with storing confidential patient information? Are you part of a federal organization that deals with data that must be kept secure? Depending on your organization and the type of data you’re considering storing in the cloud, a private cloud solution might be your best bet. Continue reading…


Summer House Cleaning Tips for Your Cloud

Summer is the perfect time to take a step back and reevaluate new ways to fine-tune your cloud infrastructure. Whether your environment is public, private or hybrid, there are a number of data center “house cleaning” tips that can significantly improve computing resource performance. Not only can these tips help you refresh your own infrastructure, they also provide useful guidance for your vendor and partner relationships. In the end, you will benefit from a much more cost-effective, secure and efficient cloud infrastructure.

Following is a list of strategies and tips to help your organization keep things running smoothly not just this summer, but year-round.

Get a Pulse on Your SLAs

When you negotiated your service level agreements (SLAs), you likely knew the ins and outs of every performance metric. But when was the last time you revisited your SLAs to make sure they are still relevant, are being met, or require renegotiation? Take the time to make sure your agreements still meet your service-level expectations. For example, if you have a hybrid environment where the public cloud partner has experienced service disruptions or unscheduled downtime, there may be an opportunity to revisit and adjust the service guarantees and pricing.

Conduct an Inventory “Refresh”

Enterprises and data center operators are quickly adopting new software and hardware technologies to make operations more flexible, efficient, scalable and even green. That makes summer a good time to take stock of those investments and evaluate their current state. Are there infrastructure elements that can be leveraged in ongoing upgrades? Are all current technologies supported by their vendors should they fail? Are there opportunities for consolidation that can reduce complexity? Should inefficient or power-hungry technologies be scrapped completely? Once you have inventoried your environment, make sure you’ve built a secure migration path that provides seamless delivery through transitions.

Drop Old Security Bad Habits

Use this time to remind all employees and partners that security is everyone’s problem. I recommend that all staff, not just data center employees, undergo annual security awareness training. It’s easy to fall into complacency, but as recent data breaches have shown, the cost of failing to be compliant and secure can be astronomical. Develop in-depth best practice guidelines and a training checklist to reinforce principles on a continual basis.

Reevaluate Your Overall Cloud Strategy

Summer is a good time to reevaluate your cloud strategy as a whole. Is it achieving the goals you set out to accomplish with your IT transformation project? With the ongoing cloud price wars, I’ve seen many organizations jump on the opportunity to adopt inexpensive public cloud solutions and then regret their approach. Although “cheap” commodity cloud is appealing, take a moment to evaluate the performance of this strategy. You may find that added customization is needed within your environment to enhance performance and meet your needs. For example, if you are using the public cloud for Big Data applications, you may find that without a purpose-built platform, such as Hadoop-as-a-Service (HaaS), you could be sacrificing a significant amount of processing speed. Remember, it’s never too late to change course.

 

FedRAMP 2.0: Moving Beyond Compliance and Embracing Risk Management

Compliance as we know it today is evolving to address the demands of an increasingly volatile threat landscape where attacks can come from anywhere — inside or outside the corporate network.

The old school vision of checking off requirement boxes annually and calling it compliance is a thing of the past. Today’s IT environment is so dynamic and fast-moving that “point-in-time” security and privacy standards are often outdated by the time organizations comply. Not to mention the fact that recent high-profile data breaches are forcing us to confront the uncomfortable reality that compliant doesn’t always equal secure. As a result, government and industry are moving toward more iterative, ongoing, real-time monitoring and management of their organizations’ information security and compliance posture.

That’s the goal, and it’s also the underlying philosophy of the Federal Risk and Authorization Management Program (FedRAMP), the federal government’s standard approach for conducting security assessments of cloud services, which aims to replace varied and duplicative procedures across agencies.

As of June 6, existing cloud services used by agencies and newly acquired services must comply with security standards outlined by FedRAMP. The FedRAMP Program Management Office published an updated FedRAMP security control baseline and new templates June 6 to reflect changes in revision 4 of the National Institute of Standards and Technology Special Publication 800-53, which was released in April 2013.

The government is reportedly working on the next set of security controls for FedRAMP 2.0, which will be based in part on NIST 800-53 revision 4. Designed to deliver a comprehensive set of controls, the latest addition to the security controls catalog was developed by NIST, the Department of Defense, the Intelligence Community, and the Committee on National Security Systems, with the aim of meeting existing and emerging threats to government systems.

Agencies and industry are facing more sophisticated and frequent attacks from a wide array of criminal organizations, non-state groups and foreign governments. This has put the spotlight on the need to protect against advanced persistent threats, in which attackers use multiple phases to break into a network, avoid detection, and then mine valuable information over the long term.

Last year, the Pentagon acknowledged in its annual report on China that the Chinese government and military had targeted U.S. government computers to collect intelligence on the U.S. diplomatic, economic and defense sectors. A few months later, revelations that Edward Snowden, a former government contractor, had leaked classified National Security Agency documents to the news media highlighted the need to protect against the insider threat.

The NIST controls focus on emerging disciplines and disruptive technologies such as mobile and cloud computing, application security, information systems resiliency, insider threat, supply chain security, and advanced persistent threats.

The defense-in-depth capabilities that these security controls provide will help strengthen organizations’ information systems and the environments in which those systems operate against emerging cyberattacks.

The government’s adoption of a “Build It Right” strategy coupled with a variety of security controls for continuous monitoring is also a positive move. Continuous monitoring will give agencies near real-time information that is vital for senior management to make ongoing risk-based decisions that will impact business operations. Continuous monitoring is part of the risk management process of FedRAMP, and is a requirement for all cloud service providers to maintain an Authority to Operate.

The Defense Department’s decision to roll the DOD Information Assurance Certification and Accreditation Process into the NIST standards is another positive move that will ensure a common set of standards across DOD and civilian agencies for moderate-level security. Of course, DOD will want to put its fingerprints on the NIST standards to enhance them and to address higher security classifications. But adherence to NIST standards is a good thing going forward for DOD.

The bottom line is information security cannot be an afterthought when building applications and systems. If it is, it’s too late. Similarly, if you are thinking about being compliant with all of the required government regulations after you build something, it is too late. Security and compliance must be baked in from inception.

What’s needed is a defense-in-depth approach that can defend against and respond to inside and outside threats. The price for being able to share data securely is perpetual vigilance.

Clash of Clouds: May the Best (not cheapest) Cloud Win

When the dust clouds settle from the private versus public cloud debate, the likely winner will be the hybrid cloud.

There is no easy, universal approach to achieving the benefits of the cloud.

Many organizations are attracted by the lower initial cost of the public cloud and think they can easily plug in an out-of-the-box solution. But as I mentioned in a recent chat with the folks at Data Center Journal, adopting a one-size-fits-all public cloud strategy comes with significant trade-offs, like security, compliance and performance, that must be carefully considered. This is why many organizations still choose to keep some of their most mission-critical applications within their own four walls via a private cloud.

The ease of use, agility, and pay-as-you-go model of the public cloud does have its advantages. Organizations can reduce their IT budgets because they don’t have to buy physical hardware, since their data is stored on virtual servers hosted by the cloud provider. And they can ramp up and scale down resources as they need them, paying for only what they use.

On the other hand, they might not have full visibility and control over their data since the cloud provider manages it. As a result, they could run into compliance issues if they don’t know where data is stored. If they are storing and transferring large amounts of data, they could experience performance degradation since they share computing resources with other tenants on the network, and we’ve seen many examples of major public cloud providers experiencing system outages.

What’s more, public clouds are made up of commodity components that tend to generalize the service. Take, for instance, Hadoop, the open-source software framework for storage and large-scale processing of data sets. Hadoop can run generically on, say, Amazon Web Services’ Elastic Compute Cloud, and it will work just fine, but the servers it runs on are generalized to run web applications, databases, almost anything. To maximize Hadoop performance, organizations must customize the hardware for the right mix of compute resources as well as optimize the network for Hadoop.

Private clouds offer greater control, better performance, deeper compliance and security, and more customization than public clouds in most cases, because they are typically hosted on a private platform within an organization’s data center. Hardware, network, and storage performance can be specified and customized in the private cloud because it’s owned by the organization.

Companies need to take the time to find the right mix of solutions for their specific needs – an extra step that actually saves significant money and headache in the long run. Many companies are turning to hybrid solutions that allow for the best of both worlds: the security, control, and customization of a private cloud, combined with the ability to scale dynamically using public cloud resources when necessary.

As organizations consider moving more critical applications into the mix of cloud solutions, they will want to take into account regulatory and compliance issues, the service levels they require, usage patterns for the different workloads, and how much integration is needed between the applications in the cloud and their enterprise systems.

 


Cloud Wars – An Interview with CTO Brent Bensten

What’s your opinion on the recent cloud wars debate? Carpathia’s CTO, Brent Bensten, sat down with The Data Center Journal: Industry Outlook to talk about the future of cloud computing and what it means for the market. Take a look at the following excerpt from their discussion:

Industry Outlook: The cloud wars are heating up as vendors slash prices. Can you explain the dynamic between players like Amazon Web Services, Google, Microsoft and others?

Brent Bensten: The economics of networking are changing for companies around the globe. For organizations turning to the cloud to accelerate innovation, provide a path to enterprise services and increase efficiencies, the price of public cloud is dropping—rapidly. In March, Google slashed the price of its cloud services in all regions by 32 percent. A day later Amazon Web Services responded by instituting its 42nd price cut for the Amazon cloud. Not to be left out, Microsoft lowered prices for its Azure public cloud yet again shortly thereafter. In May, CenturyLink joined the fray with cuts of its own.

This price war is much larger than just claiming immediate market-share victory; it’s about the future of computing and the path (and associated dollars) businesses will take with their computing infrastructure in the future. The implications of this battle are far-reaching, not just for Amazon Web Services, Google and Microsoft, but also for competitors like EMC, HP and IBM, an entire ecosystem of infrastructure-as-a-service (IaaS) players, and companies that deliver cloud-reliant services, such as Netflix and Spotify.

Want to see what else Brent had to say? In his interview, he covers the answers to questions such as “What, if any, are the service and security tradeoffs for customers as prices fall?” and “What do the cloud wars mean for organizations considering public, private or hybrid cloud projects?” Check out the full article here.


BYOD Needs a Traffic Cop

The “Bring Your Own Device” movement isn’t going away, so enterprises must embrace it, and manage it properly.

This requires a unified approach that allows for the implementation of policies and tools that promote both productivity and security as more and more employees bring their own mobile devices to the workplace and use them to work from home. Any mobile strategy, whether it involves company- or government-issued devices, workers’ own devices, or a combination of both, has to incorporate mobile data and device management capabilities and services that can function as a “traffic cop” to enforce security policies for networks and the devices that connect to them.

The concept of a traffic cop enforcing security policies and managing devices actually comes from the Defense Information Systems Agency, which issued a request for proposal two years ago, and has since selected a vendor to build mobile device management capability and a mobile application store for the DoD.

Other agencies have jumped on the MDM bandwagon. The Secret Service issued a request for information last year seeking industry input for an MDM solution and app store to manage employee smartphones and tablets running a variety of operating systems. Additionally, the General Services Administration (GSA) introduced the Managed Mobility Program, which includes tools from authorized vendors that can help agencies remotely manage mobile devices.

A unified MDM architecture secures, monitors, manages, and supports accredited mobile devices, such as smartphones, laptops and tablets, across an enterprise, providing malware detection and policy control of the devices. Administrators can also update software and push out security patches, as well as remotely wipe data from lost or stolen systems. Other functionalities include remote device configuration management and asset and property management capabilities to protect against data compromise.

MDM tools and services have to meet unique government requirements outlined in the GSA’s Mobile Device Management and Mobile Application Management (MDM/MAM) document as well as adhere to Federal Information Processing Standard validations and Federal Information Security Management Act compliance.

BYOD management and security is a daunting task for IT managers, encompassing a whole range of issues, from device ownership (how do organizations secure and separate corporate or government data from personal data?) to how to integrate existing tools such as Microsoft Active Directory or Exchange into an MDM strategy. In future blogs we will take a more in-depth look at BYOD issues and how to transition to a secure, hosted MDM solution.
