Saturday, October 14, 2017

IBM - The Power of Cloud Brokerage

Hybrid cloud adoption is now mainstream and you are making decisions every day about how to transform application and infrastructure architectures, service delivery, DevOps, production operations and governance. With Cloud and Systems Services you rethink how technology can be used to give you more power than ever before.

Cloud and Systems Services, part of IBM Services Platform with Watson, are infused with automation and cognition so you stay ahead of the needs of your ever-changing business.







To learn more or to schedule 30 minutes to discuss your Enterprise IT issues, click here: https://ibm.co/2g7lHR3



This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.







Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)







Friday, September 29, 2017

More SMB Love Needed



In a recent post, titled “10 Surprising Facts About Cloud Computing and What It Really Is”, Zac Johnson highlighted some interesting facts about cloud computing in the SMB marketplace:
  • Cloud computing is up to 40 times more cost-effective for an SMB compared to running its own IT system.
  • 94% of SMBs have experienced security benefits in the cloud that they didn’t have with their on-premises service.
  • Recovery times are four times faster for SMBs using cloud computing than for those not utilizing cloud services.
  • For SMBs, energy use and carbon emissions could be cut by 90% by using cloud computing, saving the environment and energy costs.

These advantages strongly suggest that cloud computing services should dominate SMB information technology.  Although one of the most prominent of these cloud services is Microsoft’s Office 365 (O365), a recent survey cited by CIO.com suggests that 83% of U.S. small and medium businesses (SMBs) have yet to use any form of O365.  If cloud services can deliver such remarkable improvements, why are SMBs holding back?

According to the survey, part of the reason is that SMBs often lack the internal resources needed to analyze the cloud migration opportunity.  This type of analysis often requires testing multiple cloud-based business and productivity services, as well as more focused attention on data protection capabilities.  Many SMB executives see cloud computing as nothing but marketing hype and are more focused on running their businesses.  Cloud services may also be perceived as confusing, technically overwhelming, and even frightening.  Another key technical challenge is dealing with a more sophisticated networking environment that may require virtual private network (VPN) management and remote infrastructure access.

The networking challenge is further exacerbated by the requirement to support a distributed mobile workforce with secure mobile device access to company network resources.  NETGEAR is making an impressive bid to address this challenge with its recent release of a new line of small business switches, access points, and NAS devices equipped for native cloud management via a new mobile application.  The app, called Insight, is designed to let administrators or unskilled end users discover and configure multiple wired and wireless network devices.  Users can then monitor and manage these network resources remotely through an intuitive touchscreen interface.  Insight is designed to fill a critical gap in the networking market for simple SMB solutions that provide robust functionality.


Switching from software or CPU license-based pricing to the subscription-based utilization models offered by cloud service providers can also require an SMB to conduct a careful economic analysis of the change.  This change can potentially divert finance and IT staff from their core jobs. The reality is that most cloud services aren’t designed for SMB consumption.  Small businesses are therefore likely postponing cloud migration because they don’t know where to start or don’t possess the internal resources to manage through the transition.

This small business industry challenge is bound to become harder. According to International Data Corporation (IDC), small and medium business spending on IT hardware, software, and services, including business services, is expected to increase at a compound annual growth rate (CAGR) of 4.2%, reaching $668 billion in 2020.

As SMB cloud adoption grows, the need for cloud transition support in the SMB marketplace will continue to grow with it.  As a historically underserved market, more SMB-tailored cloud services and cloud adoption support are desperately needed.  Unfortunately, the SMB market is typically treated as an afterthought by enterprise vendors, and small business solutions are often designed as dumbed-down versions of enterprise solutions.  Let’s hope that more companies like NETGEAR will wake up and serve this clear and growing SMB marketplace need.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Monday, September 4, 2017

ATMs Are IT Too!


That world of homogeneous IT managed entirely by the internal IT organization has long disappeared.  Operations today require efficient and global management of technologically heterogeneous environments. The challenges and mistakes organizations make when tackling this important task include:
  • Operational disconnects caused by ineffective internal communications;
  • Resource contention when multiple, independently developed project plans compete;
  • Incompatible technical documentation; and
  • Inconsistent communications with vendors.
A case in point is the finance industry, which has endured some rather unique pains in this area, especially when it comes to ATM fleet management. According to Diebold Nixdorf, a world leader in connected commerce, this problem has been caused by three major trends that have changed the nature of ATM network management.
The first and broadest driver of these changes has been the rapid adoption of newer and more sophisticated technology. Some reports cite that in 2014, up to 95% of the world’s ATMs were running Windows XP. That year, the entire industry was essentially forced to transition to Windows 7, and this at a time when some banks were still using OS/2!
“These more sophisticated systems, requiring updates, patches, and support in real-time, along with software and hardware that can operate nimbly in an agnostic ecosystem. And as more and more transactions are migrated to self-service terminals, the devices must advance in complexity, too.”
Security challenges, the second key trend, are also morphing daily as threats become more and more diverse. Specific problems include physical security of the cash inside the terminal, malware threats to software, and the use of data-skimming devices. As banks expand their self-service networks, the competition to deliver greater functionality and more complex transactions within an ever tighter regulatory environment for personally identifiable information is daunting.
The final trend involves management and overhead. As the traditional focus of IT support groups has shifted from PCs, firewalls, and routers toward the administration of an extensive network of remote self-service terminals, the scope of the required core competencies has changed tremendously. These teams must now deal with multi-vendor hardware, software, security, and services. To deal with these tectonic shifts, financial institutions are now looking to partner with technology services companies.
In this strategy shift, they are looking for a provider that brings broad multi-vendor management skills and analytics-based, proactive technical support. Additional criteria for selecting a multi-vendor management partner include:
  • Global presence with the ability to provide on-site engineering support to any ATM site;
  • Demonstrated continuity of support as exhibited by an ability to dispatch the same customer engineers on most occasions;
  • Customer engineers with proven and demonstrable experience with the same type of installation and configuration;
  • Support organizations with the breadth and depth of resources necessary to deliver high-quality support with minimal service disruption; and
  • A global logistics infrastructure capable of providing the timely delivery of parts from any vendor, if required.
IBM has proven to be a major player in this space. Its ATM and branch services support provides a predictive maintenance solution that uses advanced analytics to identify potential concerns. IBM then works with the financial organization’s IT teams to schedule proactive support services. This proactive approach ensures proper intervention before customer service is disrupted. As a proven, global provider of multi-vendor service support, IBM can be your single agnostic vendor supporting your multi-vendor ATM environment. If your team needs a multi-vendor support partner, consider IBM.

This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)


Tuesday, August 29, 2017

Digital Transformation Asset Management



Today’s businesses run in the virtual world. From virtual machines to chatbots to Bitcoin, physical has become last century’s modus operandi.  Dealing with this type of change in business even has its own buzzword – Digital Transformation.  From an information technology operations point of view, this has been manifested by organizations increasingly placing applications, virtual servers, storage platforms, networks, managed services and other assets in multiple cloud environments.  Managing these virtual assets can be much more challenging than it was with traditional physical assets in your data center.  Cost management and control are also vastly different than the physical asset equivalent.  Challenges abound around tracking and evaluating cloud investments, managing their costs and increasing their efficiency.  Managers need to track cloud spending and usage, compare costs with budgets and obtain actionable insights that help set appropriate governance policies.

The cloud computing operational expenditure (OPEX) model demands a holistic management approach capable of monitoring and taking action across a heterogeneous environment.  This environment is bound to contain cloud services from multiple vendors and managed service providers.  Enterprises also need to manage services from a consumption point of view. This viewpoint looks at the service from the particular application down to the specific IT service resources involved, such as storage or a database. Key goals enterprises need to strive for to be successful in this new model include:


  • Obtaining ongoing visibility into the actual cloud inventory;
  • Viewing current and projected costs versus industry benchmarks;
  • Establishing and enforcing governance control points using financial and technical policies;
  • Receiving and proactively responding to cloud cost and operational variances and deviations;
  • Gaining operational advantages through advanced analytics and cognitive computing capabilities;
  • Simulating changes to inventory, spend goals and operational priorities before committing;
  • Managing policies through asset tagging across providers and provider services; and
  • Identifying and notifying senior managers about waste and opportunities for cost savings.
Accomplishing these goals across a hybrid IT environment will also require timely, accurate and consistent information delivery to the organization’s CIO, CFO, IT Financial Controller, and IT Infrastructure and Operations Managers.  Ideally, this information would be delivered via a “single pane of glass” dashboard.
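To make the “single pane of glass” idea concrete, here is a minimal sketch (in Python, with invented field names and cost figures rather than any vendor’s actual billing schema) of how normalized cost records from several providers can be rolled up by tag into one consolidated view:

```python
from collections import defaultdict

# Hypothetical normalized cost records, as they might be exported from each
# provider's billing API. Field names and figures are illustrative only.
cost_records = [
    {"provider": "aws",   "asset": "vm-web-01",  "tags": {"app": "storefront", "env": "prod"}, "monthly_cost": 412.50},
    {"provider": "azure", "asset": "db-core-02",  "tags": {"app": "storefront", "env": "prod"}, "monthly_cost": 980.00},
    {"provider": "ibm",   "asset": "vm-test-07",  "tags": {"app": "analytics",  "env": "dev"},  "monthly_cost": 133.20},
]

def rollup_by_tag(records, tag_key):
    """Aggregate spend across all providers by a single tag key."""
    totals = defaultdict(float)
    for record in records:
        tag_value = record["tags"].get(tag_key, "untagged")
        totals[tag_value] += record["monthly_cost"]
    return dict(totals)

if __name__ == "__main__":
    # One consolidated, provider-agnostic view of spend per application.
    print(rollup_by_tag(cost_records, "app"))
    # {'storefront': 1392.5, 'analytics': 133.2}
```

A brokerage platform performs this normalization and aggregation continuously, but the principle is the same: consistent tags are what make a cross-provider roll-up possible.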

One path towards gaining these capabilities would be through the use of a cloud services brokerage platform like IBM® Cloud Brokerage Managed Services – Cost and Asset Management. This “plug and play” service can assist in the management of spending and assets across hybrid clouds by visualizing data that provides focus on asset performance.  Through the use of predictive analytics, it can also provide insight-based recommendations that help in the prioritization of changes according to their expected level of impact.  Analytics enables an ability to recalibrate cost by comparing planned versus actual operational expenditures.  The built-in cloud service provider catalog, pricing, and matching engines can also help organizations find alternative providers more easily.  Using IBM Watson® cognitive capabilities, IBM Cloud Brokerage Managed Services – Cost and Asset Management will also highlight cloud best practices and expected results based on IBM’s rich knowledge base of cross-industry cloud transition experience.

Operating a business from a virtual IT platform is different.  That is why advanced cost and asset management skills, capabilities and tools are needed.  According to Gartner, more than US$1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. This makes cloud computing one of the most disruptive forces of IT spending since the early days of the digital age.  You and your organization can be ready for these tectonic changes by implementing the straightforward five-step process supported by IBM Cloud Service Brokerage capabilities:


  1. Establish governance thresholds and policies for services;
  2. Connect the advanced management platform across all cloud service accounts;
  3. Track the costs of the services, including recurring and usage-based costs;
  4. Enforce compliance on the costs and asset usage using the purpose-built cost analytics engines; and
  5. Simulate and optimize the control and compliance actions and better control your costs.
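As a rough illustration of steps 1, 3 and 4, the sketch below (hypothetical thresholds and cost figures, not an IBM API) defines governance thresholds per environment, tracks recurring and usage-based costs, and reports which services are within policy:

```python
# Hypothetical governance thresholds (step 1): monthly budget per environment tag.
policies = {"prod": 5000.00, "dev": 750.00}

# Tracked costs per service (step 3), split into recurring and usage-based parts.
# All figures are invented for illustration.
tracked_costs = [
    {"service": "object-storage", "env": "prod", "recurring": 1200.00, "usage": 4100.00},
    {"service": "ci-runners",     "env": "dev",  "recurring": 200.00,  "usage": 380.00},
]

def compliance_report(costs, policies):
    """Step 4: compare each service's total cost against its environment's threshold."""
    report = []
    for item in costs:
        total = item["recurring"] + item["usage"]
        limit = policies.get(item["env"], 0.0)
        report.append({"service": item["service"], "total": total,
                       "limit": limit, "compliant": total <= limit})
    return report

for row in compliance_report(tracked_costs, policies):
    print(row)
# {'service': 'object-storage', 'total': 5300.0, 'limit': 5000.0, 'compliant': False}
# {'service': 'ci-runners', 'total': 580.0, 'limit': 750.0, 'compliant': True}
```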



This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor.
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)

Sunday, July 30, 2017

The Game of Clouds 2017

The AWS Marketplace is growing at breakneck speed, with 40% more listings than last year! This and more insights were revealed when CloudEndure used its custom tool to quickly scan the more than 6,000 products available on the AWS Marketplace. The top offerings are highlighted in the image below, but additional detail is available on their blog.


"So whether you are a Stark, a Targaryen, or even a Lannister, the Game of Clouds map will help you attain the crown of AWS cloud computing perfection."




( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Friday, July 14, 2017

Managing Your Hybrid Cloud

Photo credit: Shutterstock

Runaway cloud computing costs may be causing an information technology industry crisis.  Expanding requirements, extended transition schedules and misleading marketplace hype have made “Transformation” a dirty word.  Questions abound about how to manage cost variances and deviations across assets and suppliers. A recent Cloud Tech article explained that while public cloud offers considerable cost savings in comparison to private or on-premises alternatives, there may also be significant hidden costs. Operational features like auto-scaling can cause costs to soar in line with demand for resources, making predicting costs difficult and budgeting even harder. There is also an acute need for a holistic and heterogeneous system that can track the costs of cloud services from the point of consumption (e.g., an application or business unit) down to the resources involved (e.g., storage or compute service).
Sitting at the apex of all of these issues is the CFO or corporate Vice President of Finance. As the key budget manager for most organizations, this office is where many of the key financial decisions are made. It is also where the spectrum of IT cost responsibility extends from the pure financial analytics tasks of:
  • Optimization;
  • Forecasting and projection; and
  • Financial reporting
To the pedestrian but crucial accounting tasks like:
  • Show-backs and charge-backs;
  • Charge reconciliation; and
  • Budgeting policy management
The most prevalent cause of these financial problems is a failure to keep track of virtual assets in the cloud.  Many companies have completely lost visibility and control of cloud computing costs simply because they failed to tag and track these assets.  Unfortunately, this error is typically realized only after hundreds or even thousands of cloud-based assets have been instantiated.
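Because missing tags are the root cause of that lost visibility, basic tag hygiene can be checked automatically. The following sketch assumes a hypothetical inventory export and an example set of required tag keys; it simply flags every asset whose cost can no longer be attributed:

```python
# Hypothetical inventory pulled from provider APIs; tag keys are examples only.
REQUIRED_TAGS = {"owner", "cost-center", "app"}

inventory = [
    {"id": "i-0a1b2c", "provider": "aws",   "tags": {"owner": "web-team", "cost-center": "cc-104", "app": "storefront"}},
    {"id": "vm-7731",  "provider": "azure", "tags": {"app": "reporting"}},
    {"id": "vol-9f3e", "provider": "aws",   "tags": {}},
]

def find_untracked_assets(assets, required):
    """Return assets missing any tag needed to attribute their cost."""
    return [
        {"id": a["id"], "provider": a["provider"], "missing": sorted(required - a["tags"].keys())}
        for a in assets
        if required - a["tags"].keys()
    ]

for asset in find_untracked_assets(inventory, REQUIRED_TAGS):
    print(asset)
# {'id': 'vm-7731', 'provider': 'azure', 'missing': ['cost-center', 'owner']}
# {'id': 'vol-9f3e', 'provider': 'aws', 'missing': ['app', 'cost-center', 'owner']}
```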
Experts have also outlined a five-step process that helps enterprises bring control and governance to hybrid cloud IT cost.
Step 1: Establish governance thresholds and policies for services
Step 2: Access your cloud service provisioning accounts
Step 3: Track the costs of the services, including recurring and usage-based costs
Step 4: Enforce compliance on the costs and asset usage using the purpose-built cost analytics engines; initiate and track changes
Step 5: Simulate and optimize the control and compliance actions and better control your costs
Managing spend and assets across hybrid clouds also requires actionable data. This will help the CFO focus on which assets are performing as expected and which are not. Predictive analytics and insight-based recommendations can also help to drive the prioritization of changes that will have the greatest impact.
These sorts of challenges can certainly be acute, but the solution for helping organizations gain control of them will typically include holistic hybrid cloud management. In fact, financial organizations are just now realizing their critical role in managing the operational expenditure model embraced by cloud computing. Services specifically designed to address the financial management aspects of cloud metering, billing, workload management and service provisioning policies are just now hitting the marketplace.
One of these leading financial management services is provided by IBM. Its newly launched Cost and Asset Management application helps companies address escalating cloud costs and complexity while offering guidance into the next steps of hybrid cloud transformation. Through the use of predictive analytics to monitor and provide recommendations on a single dashboard, this service gives finance and IT one system of reference for hybrid cloud governance. This particular service can establish and enforce governance control points using financial and technical policies. Its ability to easily combine asset tags with policies can help the CFO identify and respond to financial variances before they become problems. Through the innovative use of Watson cognitive services, this particular application can tap into years of IBM experience to offer recommendations using built-in advanced analytics and cognitive capabilities. Acting on these suggestions can streamline cloud usage, predict future trends and identify waste.
If your company is currently experiencing these digital transformation challenges, learn more about managing hybrid IT finances at ibm.biz/ExploreCloudBrokerage. Establishing a focus on cloud governance, cost and asset management is a truly essential step towards expanding the operational benefits of hybrid cloud.


This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Thursday, June 29, 2017

American Airlines Adopts Public Cloud Computing


Did you know that the reservations systems of the biggest carriers mostly run on a specialized IBM operating system known as Transaction Processing Facility (TPF)? Designed by IBM in the 1960s, it was built to process large numbers of transactions quickly. Although IBM is still updating the code, the last major rewrite was about ten years ago. With all the major technology changes since then, it’s clear that IBM has already accomplished a herculean task by keeping an application viable for over 50 years!

Just like America’s aging physical infrastructure, the airlines are suffering from years of minimal investment in their information technology. This critical failure has been highlighted by a number of newsworthy incidents, including:

  • Delta, April 4, 2017 - Following storms that affected its Atlanta hub, Delta's crew-scheduling systems failed, causing days of operational issues for the airline. BuzzFeed reported that flight staff were left stranded and unable to log in to internal systems, with reportedly hours-long wait times on the crew-scheduling phone system.
  • United, April 3, 2017 - A problem with a system used by pilots for data reporting and takeoff planning forced United to ground all flights departing from George Bush Intercontinental Airport in Houston for two hours. This was the third time that this system had been blamed for causing operational problems at United. Around 150 flights operated by United or its regional partners out of IAH were delayed that day, and about 30 were canceled, according to flightaware.com.
  • ExpressJet, March 20, 2017 - A system-wide outage at ExpressJet delayed for hours the flights it operates for Delta, United, and American Airlines. The FAA issued a ground stop at the airline's request, preventing its planes from taking off. That day it had 423 delays and 64 cancellations, about a third of its scheduled operations, according to flightaware.com.
  • JetBlue, Feb. 23, 2017 - An outage at JetBlue forced the airline to check in passengers manually in Ft. Lauderdale and Nassau. Passengers were unable to use mobile boarding passes and check-in kiosks.

While these incidents can be scary, American Airlines has recently taken a major step towards avoiding such events by migrating a portion of its critical applications to the cloud. In a recent announcement, the carrier said that it will be moving its customer-facing mobile app and its global network of check-in kiosks to the IBM Cloud. In addition, other workloads and tools, such as the company’s Cargo customer website, will also be moved there. In a parallel effort, all of these applications will be rewritten so that they can leverage the IBM Cloud Platform as a Service (PaaS). This will be done using a micro-services architecture, design thinking, agile methodology, DevOps, and lean development.

“In selecting the right cloud partner for American, we wanted to ensure the provider would be a champion of Cloud Foundry and open-source technologies so we don’t get locked down by proprietary solutions,” said Daniel Henry, American’s Vice President Customer Technology and Enterprise Architecture. “We also wanted a partner that would offer us the agility to innovate at the organizational and process levels and have deep industry expertise with security at its core. We feel confident that IBM is the right long-term partner to not only provide the public cloud platform, but also enable our delivery transformation.”

This latest announcement demonstrates why cloud computing is the future of just about every industry, given the cost savings, operational improvements, data security and business agility delivered by cloud-based services. According to Patrick Grubbs, IBM's vice president of travel and transportation, American Airlines will also be able to reduce cost by leveraging an inherent cloud computing ability to match compute resources to the variable requirements that come from seasonal peaks.

This move by American Airlines is sure to spur others towards a quicker adoption of cloud computing.  I look forward to the stampede.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)
 



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Wednesday, May 31, 2017

Crisis Response Using Cloud Computing



Cloud computing is more than servers and storage. In a crisis situation it can actually be a lifesaver. BlackBerry, in fact, has just become the first cloud-based crisis communication service to receive a Federal Risk and Authorization Management Program (FedRAMP) authorization from the United States Government for its AtHoc Alert and AtHoc Connect services. If you’re not familiar with FedRAMP, it is a US government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. The BlackBerry certification was sponsored by the US Federal Aviation Administration.

While you may not need a US Government certified solution in an emergency, your organization may really want to consider the benefits of cloud computing for crisis response. From a communications point of view, companies can use cloud-based services to quickly and reliably send secure messages to all members of staff, individual employees or specific target groups of people. Smartphone location-mapping functions can also be easily installed and used. One advantage of using application-based software installed on an employee’s smartphone is that it can be switched off when an employee is in a safe zone, providing a balance between staff privacy and protection. Location data can be invaluable and result in better coordination, a more effective response and faster deployment of resources to those employees deemed to be at risk.


Using the cloud for secure two-way messaging enables simultaneous access to multiple contact paths, which include SMS messaging, emails, VOIP calls, voice-to-text alerts and app notifications. Cloud-based platforms have an advantage over other forms of crisis communication tools because emergency notifications are not only sent out across all available channels and contact paths, but continue to be sent out until an acknowledgement is received from the recipient. Being able to send out notifications and receive responses, all within a few minutes, means businesses can rapidly gain visibility of an incident and react more efficiently to an unfolding situation. Wi-Fi-enabled devices can also be used to keep the communications lines open when more traditional routes are unusable.
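The “repeat until acknowledged” behavior is the essential pattern here. The sketch below illustrates it in simplified form; the channel senders, contact data and acknowledgement check are placeholders, not AtHoc or any other vendor’s API:

```python
import time

# Placeholder channel senders; a real service would call SMS, email, VoIP and
# push-notification providers here instead of printing.
def send_sms(contact, msg):   print(f"SMS   -> {contact}: {msg}")
def send_email(contact, msg): print(f"EMAIL -> {contact}: {msg}")
def send_push(contact, msg):  print(f"PUSH  -> {contact}: {msg}")

CHANNELS = [send_sms, send_email, send_push]

def notify_until_acknowledged(contact, message, acknowledged,
                              max_rounds=5, wait_seconds=60):
    """Send the alert over every channel, repeating until the recipient acknowledges."""
    for _ in range(max_rounds):
        for send in CHANNELS:
            send(contact, message)
        if acknowledged(contact):      # e.g., a reply, an app tap, or a keypress
            return True
        time.sleep(wait_seconds)       # pause, then retry across all channels
    return False                       # hand off to a human coordinator
```

A production service would also record delivery receipts per channel and escalate to alternate contacts once the retry budget is exhausted.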


While you’re thinking about your corporation’s crisis response plans, don’t forget about the data. Accessing data through cloud-based services can prevent a rescue effort from turning into a recovery operation. Sources for this life-saving resource include:
  • Data exhaust - information that is passively collected along with the use of digital technology
  • Online activity - encompasses all types of social activity on the Internet such as email, social media and internet search activity
  • Sensing technologies – used mostly to gather information about social behavior and environmental conditions
  •  “Small Data” - data that is 'small' enough for human comprehension and is presented in a volume and format that makes it accessible, informative and actionable
  • Public-related data - census data, birth and death certificates, and other types of personal and socio-economic data
  • Crowd-sourced data - applications that actively involve a wide user base in order to solicit their knowledge about particular topics or events

Can the cloud be of assistance when you’re in a crisis? A cloud-enabled crisis/incident management service from IBM may be just what you need to protect your business. IBM Resiliency Communications as a Service is a high-availability, cloud-enabled crisis/incident management service that protects your business by engaging the right people at the right time when an event occurs, through automated mission-critical communications. The service also integrates weather alerts powered by The Weather Company into incident management processes to provide the most accurate early warning of developing weather events and enable proactive response.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Tuesday, May 30, 2017

Cloudy Thinking and Digital Transformation



(Originally posted on the Engility Corporation Blog)

There’s a lot to gain from cloud computing, but success requires a thoughtful and enterprise-focused approach. Cloud computing decouples data and information from the infrastructure on which it lies, a process that is a LOT more involved than dragging some folders from your desktop to a shared drive.
Cloud computing as a mission transformation activity, not a technological one.
As an organization moves from local information hosting to the cloud, one of the most important challenges is addressing cloud computing as a mission transformation activity, not a technological one. Cloud computing isn't a new technology. It's a new way of consuming and provisioning information technology services. Adopting cloud computing means paralleling your mission processes, rethinking the economic models and abstracting your applications from the technology stack silos, which are currently the norm.

Interactions and dependencies between mission applications may be more important than the data or application itself.
One of the first lessons we learned supporting customers was that cloud migration shouldn't be planned as an application-by-application movement to a different hosting environment. Cloud adoption is an application portfolio activity. Interactions and dependencies between mission applications may be more important than the data or application itself. That's why upfront screening, analysis and digital infrastructure modeling are so critical. Boeing flew its Dreamliner aircraft designs on a computer before it started to build. Shouldn't we (and our customers) test future IT infrastructure on a computer before moving to the cloud?




That is the digital transformation approach we recommend to our customers, and we have now built an entire methodology around it called Cloud ASCEND. We formed an alliance with a few select partners: Cloud Security Alliance, Burstorm, Sequoia and IBM. These companies bring tools, lessons and optimizations available from the commercial sector (the technical operations viewpoint). We blend those offerings with the experience we've gained actually transitioning applications to the cloud and the lessons we've learned in the DoD and intelligence community (the secure mission delivery and performance viewpoint).

We knew the Cloud ASCEND digital transformation methodology couldn’t be some static, one-size-fits-all approach we trot out for every customer challenge. Our methodology constantly evolves because the world is always advancing. This is an important realization that all organizations need to internalize. Cloud computing enables rapid employment of new mission processes. It lets mission owners deploy capabilities that they didn't know existed. Cloud ASCEND is agile because effectively delivering the mission requires an agile methodology. 

It lets mission owners deploy capabilities that they didn't know existed.
Getting ready to migrate to the cloud? Consider a digital transformation strategy that delivers information mobility, operational scalability and mission agility. These are the real benefits that make the process worth the effort. Organizations can apply a digital transformation methodology to determine when and how to get started, allowing them to reduce risk, reduce complexity and migrate with confidence. Cloud ASCEND enables a sort of future proofing because digital transformation means thinking today and doing tomorrow.




Cloud Musings



Wednesday, May 17, 2017

Blockchain Business Innovation


Is there more than bitcoin to blockchain?

Absolutely, because today’s blockchain is opening up a path towards the delivery of trusted online services.


To understand this statement, you need to see blockchain as more than its most famous use case, bitcoin. As a fundamental digital tool, blockchain is a shared, immutable ledger for recording the history of transactions. Used in this fashion, it can enable transactional applications with embedded trust, accountability and transparency attributes. Instead of a Bitcoin blockchain that relies on the exchange of cryptocurrencies among anonymous users on a public network, a business blockchain can provide a permissioned network with known and verified identities. With this kind of transactional visibility, all activities within that network are observable and auditable by every network user. This end-to-end visibility, also known as shared ledgering, can also be linked to business rules and business logic that drive and enforce trust, openness and integrity across that business network.  Applications built, managed and supported through such an environment can hold a verifiable pedigree with security built right in that can:
  • Prevent anyone - even root users and administrators - from taking control of a system;
  • Deny illicit attempts to change data or applications within the network; and
  • Block unauthorized data access by ensuring encryption keys can never be misappropriated.
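The “immutable ledger” property comes from chaining each new record to a hash of the one before it. The toy example below (plain Python, not Hyperledger or Bitcoin code) shows why a later attempt to rewrite history is immediately detectable:

```python
import hashlib, json, time

def block_hash(block):
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_transaction(chain, transaction):
    """Add a transaction, linking it to the hash of the previous block."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(), "tx": transaction, "prev_hash": previous})

def verify(chain):
    """Recompute every link; any tampering with history breaks the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append_transaction(ledger, {"from": "supplier", "to": "oem", "item": "part-1138", "qty": 500})
append_transaction(ledger, {"from": "oem", "to": "dealer", "item": "part-1138", "qty": 120})
print(verify(ledger))          # True
ledger[0]["tx"]["qty"] = 400   # attempt to rewrite history
print(verify(ledger))          # False
```

A business blockchain adds the pieces this toy omits: distributed copies of the chain, consensus on what gets appended, and permissioned identities for every participant.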

From an industry vertical point of view, this approach can:
  • Give financial institutions an ability to settle securities in minutes instead of days;
  • Reduce manufacturer product recalls by sharing production logs with original equipment manufacturers (OEMs) and regulators; and
  • Help businesses of all types more closely manage the flow of goods and related payments with greater speed and less risk.
Innovators within just about any industry can build, run and manage their own business blockchain network. And even if the organization isn’t quite ready to do the heavy lifting, it can consume a blockchain service from companies like IBM.

Ready-made frameworks are also available from the Hyperledger Project, an open source collaborative effort created to advance cross-industry blockchain technologies. Available Hyperledger business frameworks include:
  • Sawtooth - a modular platform for building, deploying, and running distributed ledgers that includes a consensus algorithm which targets large distributed validator populations with minimal resource consumption.
  • Iroha - a business blockchain framework designed for incorporation into infrastructure projects that require distributed ledger technology.
  • Fabric - a foundation for developing applications or solutions with a modular architecture that allows components, such as consensus and membership services, to be plug-and-play.
  • Burrow - a permissionable smart contract machine that provides a modular blockchain client with a permissioned smart contract interpreter built in part to the specification of the Ethereum Virtual Machine (EVM).

If your team is looking to innovate and take a leadership position within your industry, business blockchains may be the perfect enhancement for your business-focused application.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Friday, May 5, 2017

How Quantum Computing with DNA Storage Will Affect Your Health



By Guest Contributor: 
Taran Volckhausen, Contributing Editor at Vector (http://www.indexer.me)

Moore's Law, which states that processing speeds will double every two years as we cram more and more silicon transistors onto chips, has been faltering since the early 2000s, when the law started to run up against fundamental limitations presented by the laws of thermodynamics. While the chip industry, with Intel leading the charge, has found ways to sidestep the limitations up until now, many are now saying that despite the industry’s best efforts, the stunning gains in processor speeds will not be seen again by the simple application of Moore’s Law. In fact, there is evidence to show that we are reaching the plateau for the number of transistors that will fit on a single chip. Intel has even suggested silicon transistors can only keep getting smaller for about another five years.
As a result, Intel has resorted to other practices to improve processing speeds, such as adding multiple processing cores. However, these new methods are only a temporary solution because computing programs can benefit from multi-processor systems only up to a certain point.



RIP Moore’s Law: Where do we go from here?

No doubt, the end of Moore’s Law will present headaches in the immediate future for the technology sector. But is the death of Moore’s Law really all bad news? The fact that the situation is stirring heightened interest in quantum computing and other “supercomputer” technology gives us reason to suggest otherwise. Quantum computers, for instance, do not rely on traditional bit processors to operate. Instead, quantum computers make use of quantum bits, known as “qubits”: two-state quantum-mechanical systems that can process both 1s and 0s at the same time.
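Superposition can be illustrated with ordinary linear algebra. The small sketch below simulates a single qubit on a classical machine, which is exactly what real quantum hardware does not need to do; it only shows what “both 1 and 0 at the same time” means mathematically:

```python
import numpy as np

# A qubit's state is a unit vector of two complex amplitudes over |0> and |1>.
zero = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
probabilities = np.abs(state) ** 2
print(probabilities)   # [0.5 0.5]: measuring yields 0 or 1 with equal probability
```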

The advances in processing speeds made possible by quantum computing would make Moore’s Law look like a caveman’s stone tool. For instance, the Google-funded D-Wave quantum supercomputer is reported to outperform traditional computers in processing speed by a mind-blowing factor of 100 million. With the advantages offered by “quantum supremacy” easy to comprehend, the race is now on between tech heavyweights such as Google, IBM, Microsoft and Intel to successfully prototype and release the first quantum computer for commercial use. However, due to the “weird” quantum mechanics the technology relies on, there are more than a few barriers to working with and storing the data derived from processing with qubits.

Brave new world: Quantum Computing with DNA-based Storage

Basically, the fundamentals of quantum mechanics don’t permit you to store information on the quantum-computing machine itself. While you could convert its data for storage on traditional devices, such as solid-state hard drives, you would need to handle a nearly infinite amount of information, which would require an impossible amount of space and energy to achieve. However, there could be a solution, but it requires us to look within. Not in a hippy-dippy “finding yourself” sort of way, but rather at the double helix code found in humans and almost all other organisms: DNA. For decades, researchers have been toying with using DNA as both a computing and a storage device. Recently, a team of researchers at Columbia University demonstrated a coding strategy that could store 215 petabytes of information in a single gram of DNA. "Performing sentiment analysis on quantum computing and DNA storage topics with the Vector API may uncover robust demand for these technologies in various industries such as healthcare," says Jo Fletcher, Co-Founder of Indexer.me.
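To see how binary data maps onto DNA at all, consider a deliberately simplified two-bits-per-base scheme. This is not the Columbia team’s actual coding strategy, which adds redundancy and error correction, but it shows the basic idea:

```python
# Map every two bits to one nucleotide; real schemes add error correction
# and avoid problematic sequences such as long runs of the same base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)           # CGGACGGC
print(decode(strand))   # b'hi'
```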

What would supercomputers mean for health treatments?

The human body is an incredibly complex organism. While the markets have released many life-saving drugs, there are many barriers holding us back from realizing their maximum potential. Standard computing isn’t powerful enough to truly predict the ways a drug will react with an individual’s particular genetic composition and unique environmental factors. With quantum computing based on DNA storage, however, you would have the ability to examine pretty much any scenario imaginable by mapping a much more accurate prediction of any given drug’s interaction with a particular person based on their genetics and environment. With quantum computing, medical professionals will be able to open a new chapter in drug prescription outcomes by tailoring each treatment to meet the exact requirements of each individual.

About Vector

Vector is a natural language processing application that performs information extraction on millions of news stories per day. It provides high value to any quantitative researcher, adding a collaborative-authoring workflow in perfect synergy with the most powerful and unique faceted search in the business. For more information, please visit www.indexer.me or contact jofletcher@indexer.me.


About Indexer

Indexer is a tech start-up in the artificial intelligence space and has a focus on computer vision and natural language processing technologies.

This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)