Wednesday, July 18, 2018

Could Budget Sweeps Fix Your Cybersecurity Problem?


A recent roundtable discussion in Washington, DC, with federal IT and cyber leaders focused on the business drivers, challenges, and evolving strategies around cybersecurity in government. After an opening presentation by Jim Quinn, the lead systems engineer for the Continuous Diagnostics and Mitigation program at the Department of Homeland Security, the discussion highlighted the need for data security. Key takeaways included:
  • A new emphasis on data-level security across government that puts security controls closer to the data itself, rather than focusing on the perimeter.
  • The urgency around data security is increasing: 71 percent of agencies have been breached, a threefold increase from three years ago.
  • An expanding requirement to add more and more capabilities to mission systems, with the understanding that protecting data is part of the mission.
  • Agencies that only focus their time, energy and budget on meeting various mandates are having trouble keeping up with evolving cyber threats.
  • While agencies have much flexibility in how they acquire, manage and deliver information and services, they are still responsible for protecting their data. Agencies must, therefore, approach data security at the enterprise level.
  • Data security is a matter of law: 44 U.S.C. § 3542 directs agencies to ensure the confidentiality, integrity, and availability of government data.

As I’ve written many times before, organizations need to focus on how to transition to a hybrid IT future. The overall information technology marketplace is also undergoing these dramatic shifts toward data-centric security. Data management has moved from the management of structured data into an environment where real-time analysis and reporting of streaming data are essential.

International commerce is also entering an environment of stricter data management regulations and national data sovereignty laws that, if violated, introduce the possibility of punitive remedies and fines. This rapid progression has also driven a massive change in information technology services. Cloud and managed service providers are meeting this need through the innovative creation and deployment of API-accessible, immediately consumable data manipulation services. Enterprise IT organizations have shown themselves unable to keep pace with the blistering increase in the number and breadth of broader IT marketplace services. It’s also not cost-effective, or even desirable, for them to try.

With the recent focus on data-level security and year-end budget sweeps around the corner, shouldn’t your agency be looking at how to better store and protect its data? Mandates around IT modernization and cloud computing aren’t going away soon either. With cloud and managed service provider data storage solutions so accessible, your current on-premises solution may be hurting your mission in many ways, including:
  • High CAPEX driven by significant upfront equipment costs, leading to poor ROIs with long payback periods;
  • High OPEX characterized by recurring power, cooling and rack space expenses;
  • Expensive monthly hardware and software maintenance and support fees;
  • Excessive system administration cost and complexity, leading to high ongoing operations expenses;
  • Obsolescence concerns caused by storage vendors that regularly retire products and discontinue support plans, often subjecting customers to costly and disruptive upgrades;
  • High mission operational risk due to an inability to replicate live data to a secondary data center; and
  • Complex legacy storage solutions that are difficult to configure and administer.

Take a minute to think about this. Maybe those year-end sweep dollars would be better spent on improving your mission readiness with a cloud storage solution like Wasabi. Wasabi is hot cloud storage. It’s being used to archive data, or as a second copy, because the price for storage on Wasabi is so low and Wasabi has made cloud storage pricing predictable, with no egress charges. It’s also secure, with 11 nines of durability, and it offers immutability, so your data is protected from the most common causes of data loss. Finally, Wasabi is high-performing: the company claims speeds six times faster than its competitors. It’s easy to test by signing up for a free trial at wasabi.com.
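To put the "11 nines of durability" claim in perspective: durability figures like this are conventionally quoted as the probability, per object per year, that an object survives (the exact definition varies by vendor, so treat the numbers below as an illustration of the convention, not Wasabi's own math):

```python
# "11 nines" durability interpreted as the conventional annual
# per-object survival probability of 99.999999999%.
durability = 0.99999999999
annual_loss_prob = 1 - durability        # ~1e-11 chance of losing any one object per year

# Expected annual losses for a portfolio of one million stored objects.
objects = 1_000_000
expected_losses = objects * annual_loss_prob
print(expected_losses)  # roughly 1e-5, i.e. about one object lost per 100,000 years
```

In other words, at that durability level, hardware-driven data loss is effectively negligible compared with operational risks like accidental deletion, which is what the immutability feature addresses.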





This post was brought to you by Wasabi Hot Storage 



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)



Monday, July 16, 2018

Cloud Migration Part One: An Overview




Business is all about efficiency and effectiveness. In today’s world, however, those twin goals almost always lead to cloud migration. This anecdotal observation is supported by Gartner, which sees worldwide public cloud service revenue jumping to over $300B by 2021.


Independent research from MarketsandMarkets echoes this expectation in its cloud migration services forecast, which sees this market subset growing from $3.17B in 2017 to $9.47B by 2022, at a Compound Annual Growth Rate (CAGR) of 24.5%. With migration being such a high-priority activity, many organizations are looking for the most efficient and effective cloud migration strategy.
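The cited growth rate checks out against the forecast endpoints. CAGR is simply the constant annual rate that carries the starting value to the ending value over the period:

```python
# Verify the cloud migration services CAGR from the 2017 and 2022 figures.
start, end = 3.17, 9.47   # market size in $B, 2017 -> 2022
years = 2022 - 2017

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 24.5%
```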
In addressing this question for thousands of customers worldwide, IBM Global Technology Services (GTS) has migrated applications in just about every industry. These migrations have targeted global service providers like AWS and Azure, as well as regional and local ones. The best practices GTS has honed through these experiences include:
  • Understanding and classifying business-critical data;
  • Executing an efficient process for screening and selecting applications for cloud migration;
  • Following a methodology for discovering the most effective strategy for each application migration; and
  • Selecting the most cost-effective and industry-aligned cloud service provider(s).
Experience has also shown that businesses are in different stages of their “Journey to the Cloud.”  These initial stages often include:
  • Planning and designing common foundational infrastructure services;
  • Deploying to public clouds through pattern- and template-based automation;
  • Migrating workloads to the most appropriate cloud through a standardized, repeatable, tool-driven framework;
  • Monitoring and managing workloads using standardized tools and processes aligned to cloud platforms; and
  • Governing, tracking, managing and optimizing cloud usage and spend.
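As an illustration of the screening step described above, a first-pass portfolio screen can be sketched as a simple filter-and-rank over application metadata. Everything here (the fields, thresholds, and function names) is a hypothetical simplification for this post, not the GTS methodology itself:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    data_sensitivity: int      # 1 (public) .. 5 (regulated) -- from data classification
    business_criticality: int  # 1 (low) .. 5 (mission critical)
    cloud_ready: bool          # e.g. no hard dependencies on local hardware

def screen(apps, max_sensitivity=3):
    """Keep cloud-ready apps under a data-sensitivity threshold,
    ranked by business criticality (highest benefit first)."""
    candidates = [a for a in apps
                  if a.cloud_ready and a.data_sensitivity <= max_sensitivity]
    return sorted(candidates, key=lambda a: a.business_criticality, reverse=True)

portfolio = [
    App("intranet-wiki", 2, 2, True),
    App("payments", 5, 5, True),        # too sensitive for this pass
    App("crm", 3, 4, True),
    App("mainframe-batch", 3, 5, False) # not cloud-ready
]
print([a.name for a in screen(portfolio)])  # -> ['crm', 'intranet-wiki']
```

A real screening exercise adds the KPI and ROI dimensions discussed in Part Three, but the shape is the same: classify first, then filter and rank.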


These best practices and initial stages are common to the most successful cloud migration projects.

This series, presented in four weekly installments, lays out the details of how leading organizations have transformed themselves through cloud migration and how GTS has embedded industry best practices into its hybrid cloud service delivery model. “Part Two: Classifying Organizational Data” covers the identification of key business processes and their associated data types. That article also outlines the importance of identifying process data owners and the required security controls for each data type. “Part Three: Application Screening” looks at determining the most appropriate target deployment environment, each application’s business benefit, key performance indicator options, and target return on investment. That segment also shows how to select the most appropriate migration strategy for each application. “Part Four: Executing the Migration” presents experience-informed guidance on how to effectively and efficiently execute a cloud application migration strategy. This segment includes selecting the most appropriate cloud service provider and technology services, reviewing and verifying available data security controls, and suggested steps for SLA negotiations. It also addresses business/mission model alignment, organizational change management, and migration project planning.

The series also presents the three most common cloud adoption paths for business, namely:
  • Innovation: Building cloud-native applications using the DevOps model;
  • Agility: Modernizing and migrating legacy applications and infrastructure to a native cloud model; and
  • Stability: Managing workloads and infrastructure in clouds and on premises.




This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.






Thursday, July 12, 2018

A Personal Technology for Good Redux: Call for Code


In 2013 I had the opportunity to manage a $2M demonstration of how cloud computing could be used to support natural disaster response. In that NCOIC Geospatial Community Cloud (GCC) demonstration, multiple regional clouds were managed using a cloud brokerage platform in a simulated response to a massive earthquake. Modeled after the disaster that struck Haiti in 2010, the project showed how interoperability and movement of data in an open, cloud-based infrastructure could be used to deliver a global, multidisciplinary disaster response capability. The relief simulation also showed government leaders how data sources from a variety of organizations, coupled with cloud technology, could improve capability and effectiveness while reducing cost, time and risk. These were critical lessons that, back then, I looked forward to maturing.
Now it’s 2018, and technology advances have continued to revolutionize our society. The democratization of data and information has since changed our lives in many unexpected ways. A sad fact, though, is that, although some government leaders have tried, our global society has not yet found a way to institutionalize the lessons we learned back then. While cloud computing continues to upend industry norms, the disaster response community is still stuck with antiquated processes and technologies. This unfortunate reality is but one reason why I have decided to put my energy behind the “Call for Code” initiative.

IBM is the founding member of the Call for Code Global Initiative, which was created by David Clark, a renowned leader in cause-related initiatives whose work includes iconic people and humanitarian organizations such as President Nelson Mandela, Muhammad Ali, Prince, the United Nations, Amnesty International, and The Anne Frank Center. The Call for Code Global Challenge is designed to leverage technology for good by asking software developers to create solutions that significantly improve natural disaster preparedness and relief. This competition encourages developers who want to pay their skills forward on a specific mission: alleviating human suffering. A broad cross-section of experts and humanitarian and international organizations is supporting this initiative, including the United Nations Human Rights Office and the American Red Cross’ international team. Participants will also benefit from the inaugural Call for Code Global Prize Event & Concert on October 13th, the United Nations International Day for Disaster Reduction. The initiative also boasts some star power: GRAMMY-nominated singer and human rights advocate Andra Day, whose 2015 global smash hit song "Rise Up" quickly became the voice for the voiceless, is leading a celebrity coalition.


Another motivation for joining this initiative was a recent video from Brad Kieserman, Red Cross Vice President of Disaster Operations and Logistics. In that video, he highlighted the importance of visualizing data in a way that helps responders make better decisions about the movement of resources during a disaster. His vision of using technology to address unmet disaster needs pointed out, for me, the value of the cloud as both an application delivery platform and a data repository: the same value proposition we proved back in 2013.

Over the next few months I will be blogging, tweeting, podcasting and vlogging about the many “Call for Code” activities and events. Please join me in supporting this effort by retweeting, liking and reposting this content to your friends.

Let’s all work together to help each other when disaster strikes.

This post was brought to you by IBM Developerworks. For more content like this, visit https://developer.ibm.com/code/


