Friday, December 28, 2018

Mainframe Synergies for Digital Transformation

In July 2018, Broadcom announced its intention to acquire CA Technologies. In the press release, Hock Tan, President and Chief Executive Officer of Broadcom, said:

 “This transaction represents an important building block as we create one of the world’s leading infrastructure technology companies. With its sizeable installed base of customers, CA is uniquely positioned across the growing and fragmented infrastructure software market, and its mainframe and enterprise software franchises will add to our portfolio of mission critical technology businesses. We intend to continue to strengthen these franchises to meet the growing demand for infrastructure software solutions.”
While those words look good on paper, the acquisition itself is now old news. With crunch time here, customers of both companies are asking, "What's in it for me?"

Broadcom's pursuit of this merger grew from a recognition of the magnitude of the data center market opportunity. Rapid growth in the company's networking and storage businesses was being driven by the even faster growth in the industry's need to securely and reliably scale data centers. This data center metamorphosis was, in turn, being driven by digital transformation initiatives across literally every industry. Because both companies constantly look to improve value for their customers, the merger gave existing customers of both corporations the opportunity to benefit from the natural synergy of Broadcom's industry-leading IT infrastructure offerings and CA's industry-leading suite of mainframe solutions.


Since the mainframe holds most of today's enterprise data, two of CA Technologies' industry-leading products, Zowe and Brightside, were seen as perfect complements to any organization's digital transformation efforts. As a new open source software framework, Zowe provides solutions that allow development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. When paired with Brightside's automation and self-service capabilities, this combination unlocks additional mainframe business value through cost and risk reduction. Brightside empowers next-generation developers to more easily apply their experience with modern DevOps toolchains and frameworks, increasing their ability to innovate on the mainframe platform.

The value of these offerings paired with Broadcom's IT infrastructure portfolio is immense. Working with Broadcom's infrastructure, enterprises can now fully meet today's data challenges, which include:
  • Data complexity and disparate data silos that inhibit growth and drive up costs; and
  • Multiple data formats and exponential data growth that further complicate the matter.
New processes that enhance business situational awareness are also enabled by this combination. Organizations can now abandon the legacy view of customer engagement as a "point in time" event. With broader situational awareness, business owners can effectively manage every customer across all possible touchpoints. This capability enables a true understanding of what a customer is doing in real time, informing correct actions and up-to-the-moment personalization. The end result is a digital transformation that enhances the organization's ability to stay continuously active and engaged with its end customers in a meaningful manner.

If your organization is undertaking or undergoing digital transformation, reach out to Greg Lotko to learn more about the mainframe synergies you can gain from the Broadcom-CA Technologies merger. As the General Manager of Broadcom's Mainframe Business Unit, he brings more than 30 years of experience in application development, application outsourcing services, software development, and infrastructure to your transformation initiative. His team, in fact, helped HSBC transition to weekly release cycles, which was foundational to that company's ability to deliver 2,000 deployments per month. This feat is even more impressive knowing that the financial services giant manages over 6 million artifacts across applications written by over 6,000 developers making 750,000 element changes a year.


This post was brought to you by Broadcom.






Saturday, November 10, 2018

2018 AT&T Business Summit: Security “in” and “of” the Cloud



While public cloud is undoubtedly an outsized piece of the conversation, news headlines about the latest data breach can make the move to cloud a very frightening proposition. The question of how to balance the downside of cloud computing risks against the upside of promising cost savings is, therefore, top of mind.

The dilemma becomes even more critical as your business starts leveraging the power of 5G telecommunications services. When added to existing network architectures and combined with other next-generation technologies like the Internet of Things (IoT) and Edge-to-Edge capabilities, 5G will dramatically alter user experience – from retail to financial services, transportation to manufacturing, to healthcare and beyond.

Answering these and other essential cybersecurity questions was the primary task of the "Security 'In' the Cloud and 'Of' the Cloud" panel I participated in at the AT&T Business Summit in Dallas, Texas. Read more about how you should address this cloud computing transition issue in my newest blog post on AT&T Business Insights.




This content was sponsored by AT&T Business.








Friday, November 9, 2018

My Brush with Royalty: Queen Latifah



Queen Latifah! 

Hip Hop Icon. Movie Star. Television Star. Fashion Model. Songwriter. Producer. Entrepreneurial Genius!? YES!


Dana Elaine Owens, her given name, is co-owner of Flavor Unit Entertainment, a firm that includes television and film production units, a record label, and an artist management company. She has been managing herself since age 21, and within two years of starting her business she had signed over 17 rap groups, including the hugely successful Naughty by Nature and Outkast. "Her first investment was in a delicatessen and a video store on the ground floor of the apartment she was living in at the time." With that background, you can see why I was so excited to see her on the main stage at the AT&T Business Summit with AT&T Communications CEO John Donovan.



Visit the AT&T Business Insights page to see her on stage and read more about my "Brush with Royalty" and how Queen Latifah's journey as a small business owner should inspire us all.

Sponsored by AT&T Business




Tuesday, October 16, 2018

What’s New in Puppet 5?



Puppet 5 has been released, and it comes with several exciting enhancements and features that promise to make configuration management much more streamlined. This article takes a comprehensive look at these new features and enhancements.

Puppet 5 was released in 2017, and according to Eric Sorensen, director of product management at Puppet, the goal was to standardize Puppet as a one-stop destination for all configuration management requirements. Here are the four primary goals of this release:
  • To standardize the version numbering of all the major Puppet components (Puppet Agent, PuppetDB, and Puppet Server) to 5, and deliver them as part of a unified platform
  • To include Hiera 5 with eyaml as a built-in capability (a sample configuration follows this list)
  • To provide clean UTF-8 support
  • To move network communications to fast, interoperable JSON
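
To make the Hiera goal concrete, here is a minimal, hypothetical hiera.yaml (version 5) sketch using the hiera-eyaml backend (the gem must be installed on the server); the hierarchy levels and key paths are assumptions for illustration, not values taken from this release:

    # hiera.yaml (v5): eyaml-encrypted per-node secrets layered over plain YAML
    # (the key locations below are conventional, not mandated)
    version: 5
    defaults:
      datadir: data
    hierarchy:
      - name: "Per-node secrets (encrypted)"
        lookup_key: eyaml_lookup_key
        paths:
          - "nodes/%{trusted.certname}.eyaml"
        options:
          pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem
          pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem
      - name: "Common data"
        path: common.yaml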

Customer feedback

Customer and community feedback played a major role in setting the goals for Puppet 5's release, helping the developers identify and define certain patterns, such as:
  • Different version numbers across components were a huge source of confusion
  • There was a lot of chaos when it came to combining components to get a working installation, as well as figuring out where each component fit
  • Since both Facter 3 and PuppetDB 3 seamlessly rolled into PC1, guaranteeing a new Puppet Collection for every major release didn’t make much sense

However, the makers ensured that one critical aspect remained unaffected: modules that worked on Puppet 4 will work unchanged under Puppet 5.

New features

Puppet 5 comes with some power-packed new features; have a look:
  • The call function: A new call(name, args…) function has been added, allowing you to call a function directly by its name (see the sketch after this list)
  • The unique function: Earlier, you had to include the stdlib module to get the unique function. None of those hassles anymore! The unique function is now directly available in Puppet 5. What's more, it can also handle Hash and Iterable data types. In addition, you can now supply a code block that determines how uniqueness is computed (also shown in the sketch after this list).
  • Puppet Server request metrics: Puppet Server 5 now comes with an http-client metric, puppetlabs..http-client.experimental.with-metric-id.puppet.report.http.full-response, to enable tracking of how long Puppet Server requests to a configured HTTP report processor take
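
A minimal sketch of the first two features, assuming a recent Puppet 5 release (the built-in unique, including its block form, landed in later 5.x versions); the values are illustrative:

    # call(): invoke a function by its name; equivalent to sprintf('%d apples', 3)
    $msg = call('sprintf', '%d apples', 3)
    notice($msg)        # => 3 apples

    # unique() works without stdlib and can take a code block that
    # computes the value used for the uniqueness comparison
    $nums    = unique([1, 2, 2, 3])
    $letters = unique(['a', 'A', 'b']) |$x| { $x.downcase }
    notice($nums)       # => [1, 2, 3]
    notice($letters)    # => [a, b]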

Enhancements

Time to take a look at some exciting new enhancements that come with Puppet 5:
  • Switched from PSON to JSON as the default: In Puppet 5, agents now download node information, catalogs, and file metadata in JSON by default instead of PSON. The move to JSON ensures better interoperability with other languages and tools while also improving performance, especially when the master parses JSON facts and reports from agents. Puppet 5 also accepts JSON-encoded facts.
  • Ruby 2.4: Puppet now uses Ruby 2.4, which ships in the puppet-agent package. All you have to ensure is that user-installed Puppet agent gems are reinstalled after upgrading to Puppet agent 5.0; this is necessary because of Ruby API differences between Ruby 2.1 and 2.4. Further, some gems may also need to be upgraded to versions compatible with Ruby 2.4.
  • HOCON gem is now a dependency: The HOCON gem, which was previously shipped in the puppet-agent package, is now also a dependency of the Puppet gem.
  • Silence warnings with metadata.json: You can now turn off warnings from faulty metadata.json by setting --strict=off.
  • Updated Puppet Module Tool dependencies: The gem dependencies of Puppet Module Tool are updated to use puppetlabs_spec_helper 1.2.0 or later, which runs metadata-json-lint as part of the validate rake task.
  • Hiera 5 default files: Puppet now creates appropriate Hiera 5-compliant hiera.yaml files in $confdir and the environment directory. Moreover, if Puppet detects an existing hiera.yaml in either $confdir or $environment, it won't install a new file in that location or remove $hieradata.

Performance boosts

All these enhancements and new features have contributed to performance boosts in many areas. Puppet 5 agent runtimes have decreased by 30% at equivalent loads (from an average of 8 seconds to 5.5 seconds). In addition, Puppet 5 server CPU utilization is at least 20% lower than Puppet 4's in all scenarios, while CPU utilization for PuppetDB and PostgreSQL under Puppet 5 has also dropped significantly in all scenarios.

Catalog compile times reported by Puppet Server have dropped by 7% to 10% compared to Puppet 4. Puppet 5 can now scale to 40% more agents with no deterioration in runtime performance, whereas Puppet 4 agent runtimes grew disastrously long when scaled to the same number of agents.


If you liked this article and want to learn more about Puppet 5, you can explore Puppet 5 Cookbook – Fourth Edition. This book takes you from a basic knowledge of Puppet to a complete and expert understanding of Puppet’s latest and most advanced features. Puppet 5 Cookbook – Fourth Edition is for anyone who builds and administers servers, especially in a web operations context.

( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)







Tuesday, October 9, 2018

5 Reasons Why Ansible is the Best CM Tool Out There


Amidst volatile markets, dynamic technology shifts, and ever-increasing customer demands, it is imperative for IT organizations to develop flexible, scalable and high-quality applications that exceed expectations and enhance productivity. A software application has numerous moving parts, which, if not effectively maintained, will definitely affect the final quality and end user experience.

This is where configuration management (CM) comes into play, its purpose being to maintain the integrity of the product or system throughout its lifecycle while also making the deployment process controllable and repeatable in order to ensure higher quality. Robust configuration management brings the following advantages to the table:
  • Reduces redundant tasks
  • Manages concurrent updates
  • Eliminates configuration-related problems
  • Streamlines inter-team coordination
  • Makes defect tracking easier

There are several effective CM tools out there, like Puppet, Chef, Ansible, and CFEngine, which provide automation for infrastructure, cloud, compliance, and security management, as well as continuous integration and continuous deployment (CI/CD). However, deciding which tool to select for an organization's automation requirements is the most critical task for a sysadmin.

A lot of sysadmins will agree that their daily chores keep them from staying up to date on automation. When they do spend time learning the nuances, they come across multiple CM tools that all offer the same benefits, at least on paper. This further complicates the decision of which CM tool to choose, especially for people who are just getting started.

So, what is the best tool for people with only a minimal grasp of automation? Ansible, and justifiably so! You may ask why. This article discusses the five reasons that make Ansible one of the most reliable and efficient CM tools out there.
  • An end-to-end tool to simplify automation
Ansible is an end-to-end tool that aids in performing all kinds of automation tasks, from control and visualization to simulation and maintenance. This is because Ansible is developed in Python, which gives it access to all of the language's general-purpose features and to thousands of existing Python packages that you can use to create your own modules. With over 1,300 modules, Ansible simplifies several aspects of IT infrastructure, including web, database, network, cloud, cluster, monitoring, and storage.

Configuration Management: Ansible's most attractive feature is its playbooks, which are simple instructions or recipes that guide Ansible through the task at hand. Playbooks are written in YAML and are human-readable, which makes it that much easier to navigate and work with Ansible. Playbooks let you manage configuration changes as code while natively handling desired state and idempotency, as the sketch below illustrates.
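
Here is a minimal, hypothetical playbook; the "webservers" host group and the nginx package are assumptions chosen for the sketch:

    # site.yml: converge web servers on a desired state (illustrative)
    - name: Configure web servers
      hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          package:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled at boot
          service:
            name: nginx
            state: started
            enabled: true

Running it is a single command, ansible-playbook -i inventory site.yml, and re-running it against hosts already in the desired state changes nothing, which is idempotency at work.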

Orchestration: Ansible, though highly simplified, can't be underestimated when it comes to orchestration power. It effortlessly integrates with any area of the IT infrastructure, be it provisioning virtual machines (VMs) or creating firewall rules. Moreover, Ansible comes in handy for aspects that other tools leave gaps in, such as zero-downtime rolling updates for multitier applications across the infrastructure.

Provisioning: With several modules for containers (Docker) and virtualization (VMware, AWS, OpenStack, Azure, and oVirt), Ansible can fold provisioning tasks into the same workflows to provide robust and efficient automation.
  • Faster learning curve

Thanks to easy initial configuration and installation, Ansible's learning curve is short. Consider this: you can install, configure, and execute ad-hoc commands across any number of servers within 30 minutes, whatever the task, be it daylight saving adjustments, time synchronization, root security, or server updates.

Moreover, it takes little time, even for a beginner, to understand the syntax and workflows, owing to the fact that Ansible uses YAML (YAML Ain't Markup Language). YAML is human-readable and, therefore, extremely user-friendly and easy to understand. Add to that the Python libraries and modules, and you have a very simple yet quite powerful CM tool in your hands.
  • Highly adaptive and flexible

Unlike legacy infrastructure models, which take too long to converge to a fully automated environment, Ansible is highly flexible in this regard. As the tech space becomes increasingly dynamic, environments have to be flexible enough to absorb changes without affecting output; otherwise, the result is undesired costs, inter-team conflicts, and manual interventions.

Ansible, however, effortlessly adapts to mixed environments, peacefully coexisting with partial and fully automated environments alike, while also enabling seamless transition between models.
  • Full Ansible control

No agents need to be installed on the endpoints; all you need is a server with Ansible installed, which manages access to the target servers over SSH (for Linux environments) or WinRM (Windows Remote Management). Thanks to playbooks, all the desired settings can be applied to the hosts defined in the inventory; you can also run ad-hoc commands from the command line, with no file definitions required whatsoever, for example, ansible all -i inventory -m ping to verify connectivity. This makes Ansible much faster than traditional client-server models.
  • Instant automation

From the instant you can ping your hosts through Ansible, you can start automating your environment. It's advisable to begin with smaller tasks, duly following best practices, and to prioritize tasks that contribute to achieving the business goals. This will help you identify and solve problems much more swiftly, while also saving time and enhancing efficiency.

In a nutshell, where Ansible wins over its competitors is in its simplicity, since even a beginner can master it in no time, and in the powerful features that make configuration management a cakewalk. Choosing Ansible will help heal the Achilles' heel of automation while also majorly enhancing productivity and efficiency.


If you found this article interesting and wish to learn more about Ansible, you can explore Learn Ansible, an end-to-end guide that will aid you in effectively automating cloud, security, and network infrastructure. Learn Ansible follows a hands-on approach to give you practical experience in writing playbooks and roles, and executing them. 

( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)








Monday, October 1, 2018

Machine learning APIs for Google Cloud Platform


Google Cloud Platform (GCP) is considered one of the Big 3 cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution supporting AI capabilities to design and develop smart models that turn your data into insights at an affordable cost.
The following excerpt is taken from the book ‘Cloud Analytics with Google Cloud Platform‘ authored by Sanket Thodge.
GCP offers many machine learning APIs; here we take a look at the three most popular:
Cloud Speech API
A powerful API from GCP! It enables the user to convert speech to text using a neural network model, and it recognizes over 100 languages from around the world. It can also filter out unwanted noise and content in various environments, and it supports context-aware recognition on any device, any platform, anywhere, including IoT. Its features include Automatic Speech Recognition (ASR), Global Vocabulary, Streaming Recognition, Word Hints, Real-Time Audio support, Noise Robustness, Inappropriate Content Filtering, and support for integration with other GCP APIs.
[Figure: architecture of the Cloud Speech API]
In other words, this model enables speech-to-text conversion by ML.
The components used by the Speech API are:
·         REST API or Google Remote Procedure Call (gRPC) API
·         Google Cloud Client Library
·         JSON API
·         Python
·         Cloud DataLab
·         Cloud Data Storage
·         Cloud Endpoints
The applications of the model include:
·         Voice user interfaces
·         Domotic appliance control
·         Preparation of structured documents
·         Aircraft (direct voice input)
·         Speech to text processing
·         Telecommunication
It is free of charge for up to 60 minutes of audio per month; beyond that, usage is charged at $0.006 per 15 seconds.
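
To make this concrete, here is a minimal sketch using the google-cloud-speech Python client library; the bucket URI and language code are assumptions for illustration:

    # Transcribe a short audio file with the Cloud Speech API.
    # Assumes the google-cloud-speech package is installed and
    # GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
    from google.cloud import speech

    client = speech.SpeechClient()

    # A hypothetical FLAC recording in a bucket you own
    audio = speech.RecognitionAudio(uri="gs://my-bucket/meeting.flac")
    config = speech.RecognitionConfig(language_code="en-US")

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # Each result carries alternatives ranked by confidence
        print(result.alternatives[0].transcript)
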
Now, as we have learned about the concepts and the applications of the model, let’s learn some use cases where we can implement the model:
·         Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring Voice ID technology into its multimodal suite of criminal identification products.
·         Buying products and services with the sound of your voice: Another popular, mainstream application of biometrics in general is mobile payments. Voice recognition has also made its way into this highly competitive arena.
·         A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays has voice recognition software powered by machine learning algorithms.
Cloud Translation API
Natural language processing (NLP) is a part of artificial intelligence that includes Machine Translation (MT), which has been a main focus of the NLP community for many years. MT deals with translating text in the source language into text in the target language. The Cloud Translation API provides a simple interface for translating an input string from one language into a target language; it is highly responsive, scalable, and dynamic in nature.
This API enables translation among 100+ languages, and it supports accurate automatic language detection. It can translate the contents of a web page directly into another language; the text need not be extracted from a document first. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates and adjustable quota, and affordable pricing.
[Figure: architecture of the translation model]
In other words, the Cloud Translation API is an adaptive Machine Translation algorithm.
The components used by this model are:
·         REST API
·         Cloud DataLab
·         Cloud data storage
·         Python, Ruby
·         Clients Library
·         Cloud Endpoints
The most important application of the model is the conversion of a regional language to a foreign language.
The cost of text translation and language detection is $20 per 1 million characters.
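
As a quick illustration, here is a minimal sketch using the google-cloud-translate Python client (v2 interface); the sample strings and target language are arbitrary:

    # Translate text and detect a language with the Cloud Translation API.
    # Assumes the google-cloud-translate package and application credentials.
    from google.cloud import translate_v2 as translate

    client = translate.Client()

    # Translate an English string into German (ISO-639-1 target code)
    result = client.translate("The mainframe still matters.", target_language="de")
    print(result["translatedText"])

    # Automatic language detection with a confidence score
    detection = client.detect_language("Bonjour tout le monde")
    print(detection["language"], detection["confidence"])
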
Use cases
Now, as we have learned about the concepts and applications of the API, let’s learn two use cases where it has been successfully implemented:
·         Rule-based Machine Translation
·         Local Tissue Response to Injury and Trauma
We will discuss each of these use cases in the following sections.
Rule-based Machine Translation
The steps to implement rule-based Machine Translation successfully are as follows (a toy sketch follows the list):
1.   Input text
2.   Parsing
3.   Tokenization
4.   Compare the rules to extract the meaning of the prepositional phrase
5.   Map each word of the input language to a word of the target language
6.   Frame the sentence in the target language
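
As a toy illustration of these steps in Python (the three-word lexicon and the single reordering rule are invented for the sketch):

    # Toy rule-based MT: English -> French via word lookup plus one reordering rule
    LEXICON = {"the": "le", "red": "rouge", "car": "voiture"}

    def translate(sentence: str) -> str:
        tokens = sentence.lower().split()            # parsing/tokenization
        words = [LEXICON.get(t, t) for t in tokens]  # word-to-word mapping
        # Rule: French adjectives usually follow the noun ("red car" -> "voiture rouge")
        for i in range(len(tokens) - 1):
            if tokens[i] == "red" and tokens[i + 1] == "car":
                words[i], words[i + 1] = words[i + 1], words[i]
        return " ".join(words)                       # frame the target sentence

    print(translate("the red car"))                  # => le voiture rouge

Even this toy shows why rule sets balloon: the correct output is actually "la voiture rouge," so a real system would also need gender-agreement rules on top of reordering.
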
Local tissue response to injury and trauma
We can learn about the Machine Translation process from the responses of a local tissue to injuries and trauma. The human body follows a process similar to Machine Translation when dealing with injuries. We can roughly describe the process as follows:
1.   Hemorrhaging from lesioned vessels and blood clotting
2.   Blood-borne physiological components, leaking from the usually closed sanguineous compartment, are recognized as foreign material by the surrounding tissue since they are not tissue-specific
3.   Inflammatory response mediated by macrophages (and more rarely by foreign-body giant cells)
4.   Resorption of blood clot
5.   Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
6.   Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts
Cloud Vision API
Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image, finding various attributes or categories, such as labels, web entities, text, document properties, safe-search verdicts, and image properties, and returning them as JSON. The labels field has many subcategories, such as text, line, font, area, graphics, screenshots, and points: how much of the image area is graphics, what percentage is text, how much is empty, and how much is covered by text. Web detection reports whether the image appears, partially or fully, somewhere on the web.
Document detection returns blocks of the image with detailed descriptions, and the properties field visualizes the colors used in the image. Safe search flags unwanted or inappropriate content so it can be removed. The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses an Optical Character Reader (OCR) that supports many languages. Note that it supports face detection but not face recognition.
[Figure: architecture of the Cloud Vision API]
We can summarize the functionalities of the API as extracting quantitative information from images, taking the input as an image and the output as numerics and text.
The components used in the API are:
·         Client Library
·         REST API
·         RPC API
·         OCR Language Support
·         Cloud Storage
·         Cloud Endpoints
Applications of the API include:
·         Industrial Robotics
·         Cartography
·         Geology
·         Forensics and Military
·         Medical and Healthcare
Cost: Free of charge for the first 1,000 units per month; after that, pay as you go.
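
Here is a minimal sketch using the google-cloud-vision Python client; the image URI is an assumption for the example:

    # Label detection with the Cloud Vision API.
    # Assumes the google-cloud-vision package and application credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # A hypothetical image stored in a bucket you own
    image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/photo.jpg"))

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        # Each label carries a confidence score between 0 and 1
        print(f"{label.description}: {label.score:.2f}")
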
Use cases
This technique can be successfully implemented in:
·         Image detection using an Android or iOS mobile device
·         Retinal Image Analysis (Ophthalmology)
We will discuss each of these use cases in the following topics.
Image detection using Android or iOS mobile device
Cloud Vision API can be successfully implemented to detect images using your smartphone. The steps to do this are simple:
1.   Input the image
2.   Run the Cloud Vision API
3.   Execute methods for detecting face, label, text, web, and document properties
4.   Generate the response in the form of a phrase or string
5.   Populate the image details as a text view
Retinal Image Analysis – ophthalmology
Similarly, the API can also be used to analyze retinal images. The steps to implement this are as follows:
1.   Input the images of an eye
2.   Estimate the retinal biomarkers
3.   Process the image to remove the affected portion without losing necessary information
4.   Identify the location of specific structures
5.   Identify the boundaries of the object
6.   Find similar regions in two or more images
7.   Quantify the damage to the retinal portion of the image
You can learn a lot more about the machine learning capabilities of GCP on their official documentation page.

If you found the above excerpt useful, make sure you check out our book ‘Cloud Analytics with Google Cloud Platform‘ for more information on why GCP is a top cloud solution for machine learning and AI.


( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)