Comparing Swarm vs Kubernetes Terminology

One of the things my team and I are sticklers about is using the proper terminology for the proper technology. It’s not just a “Principle” vs. “Principal” debate; the terms you use imply a particular technology. For instance, if a client told me they had a Docker Cluster, I might infer they are using Kubernetes, since “cluster” is the correct term there, when in fact they may be using Docker EE and should have used the term Docker Swarm.

Recently, while learning more about Docker EE Swarm at DockerCon ‘19, I started to realize the concepts are similar to those in Kubernetes, but the terminology is subtly different. I put together my own cheat sheet of comparable components so that I could keep things straight between the different sessions I attended. Now I can speak both Swarm and Kubernetes without mingling terms, something my team will certainly appreciate!

Swarm Term | Kubernetes Term | Loose Definition
Swarm | Cluster | A group of machines running together to provide high availability for containers.
Node | Cluster Member | A physical or virtual host participating in the Swarm/Cluster.
Manager | Master | Manages the strategy for how work is distributed within the Swarm/Cluster.
Worker | (Worker) Node | A participating member of the Swarm/Cluster that provides compute capacity.
Container | Container | A standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Task | Pod | A group of one or more containers deployed together on the same host.
Service | ReplicaSet | Starts and manages the tasks/pods, ensuring the desired state.
Service | Deployment | Provides declarative updates to ensure the desired state is maintained.
Stack | Stack | A collection of services that make up an application.
VIP | ClusterIP Service | The IP address representing the service definition.
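
To see how the terminology maps onto real objects, here is a minimal Python sketch (assuming the docker and kubernetes client libraries are installed, a Swarm manager and a Kubernetes cluster are reachable, and the "web"/nginx names are just placeholders) that creates the same three-replica workload both ways: a Swarm Service whose Tasks the Manager schedules onto Worker nodes, and a Kubernetes Deployment whose ReplicaSet keeps three Pods running on the cluster's Nodes.

import docker
from docker.types import ServiceMode
from kubernetes import client as k8s, config

# Swarm: a Service asks the Manager to schedule 3 Tasks (containers) across Worker nodes.
swarm = docker.from_env()
swarm.services.create(
    image="nginx:alpine",  # placeholder image
    name="web",
    mode=ServiceMode("replicated", replicas=3),
)

# Kubernetes: a Deployment creates a ReplicaSet, which keeps 3 Pods running on the Nodes.
config.load_kube_config()
k8s.AppsV1Api().create_namespaced_deployment(
    namespace="default",
    body=k8s.V1Deployment(
        metadata=k8s.V1ObjectMeta(name="web"),
        spec=k8s.V1DeploymentSpec(
            replicas=3,
            selector=k8s.V1LabelSelector(match_labels={"app": "web"}),
            template=k8s.V1PodTemplateSpec(
                metadata=k8s.V1ObjectMeta(labels={"app": "web"}),
                spec=k8s.V1PodSpec(
                    containers=[k8s.V1Container(name="web", image="nginx:alpine")]
                ),
            ),
        ),
    ),
)

Either way, the desired state is "three replicas of this container"; only the vocabulary (Service/Task vs. Deployment/ReplicaSet/Pod) differs.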

Please contact us for your Docker needs.

CleanSlate is now a Red Hat Partner

CleanSlate is an IT partner who is always by your side. By working with Red Hat, the industry’s top enterprise Linux vendor, you have a partner in the planning, deployment, and maintenance of your infrastructure. Among Red Hat’s many offerings, the leading product for modernizing and optimizing enterprise IT is Red Hat Enterprise Linux (RHEL).

Organizations in the early stages of IT modernization that currently run IBM Power servers are creating Linux LPARs to migrate workloads to Red Hat Enterprise Linux. Many other organizations facing high licensing, maintenance, and support costs are choosing to move off legacy UNIX systems to a modern, open source Linux infrastructure and adopt Red Hat solutions for application development, mobile, deployment, and cloud.

Migrating workloads to Red Hat Enterprise Linux, the enterprise-ready, open source operating system, improves productivity and efficiency across your enterprise IT systems and enables an agile IT infrastructure that can address new business demands while significantly reducing operational costs. Red Hat Enterprise Linux maximizes the value of your existing infrastructure by introducing new technologies in an incremental, balanced way.

Please contact us for your Red Hat needs.

Connecting with Students at IUPUI

I had the great opportunity on October 25th to join other Salesforce Community Members on a Speaker Panel at IUPUI. The purpose of this event was to show students in Technology Learning Programs what a career working in the Salesforce Ecosystem is all about. This was all put in place by our very own Quinn McPhail (CleanSlate Salesforce Intern and IUPUI Salesforce User Group Leader). I was joined by these other members of the Salesforce Community local to Indiana:
  • Eric Dreshfield (Salesforce MVP / Advocacy Manager @ Apttus / S. Indiana Salesforce User Group Leader / Midwest Dreamin’ Founder)
  • Mike Martin (Salesforce MVP / Director, Indianapolis Delivery Center @ Appirio / Indianapolis Salesforce User Group Leader)
  • Melissa Davis (CMO @ nimblejack)
  • Susan Punnoose (Senior Software Engineer @ Salesforce)
  • Scott Sondermann (Solutions Engineer Scout @ Salesforce)
During the speaker panel we were asked a series of questions, including what led us to a career in the Salesforce Ecosystem, how we developed the necessary skills, and what advice we could offer future Trailblazers. As a former teacher, it was very exciting to get in front of a group of students and potentially have an impact on their future career paths.

My biggest personal takeaway was how involved and connected the Salesforce Community is. You heard a consistent message: the Salesforce Ecosystem is one of a kind and truly a family. There are a ton of resources out there to skill up, and plenty of support from other members who have gone through the same or similar struggles; all you have to do is ask! If you are not currently on the Salesforce Trailblazer Community, it is time to join!

Overall, the biggest piece of advice I could share with these students, and really with anyone working or planning to work in the Salesforce Ecosystem, is to take advantage of the resources available in the Trailblazer Community and on Trailhead. Combined with your own effort, these resources will set you up for success in your career. Salesforce is ever changing, and keeping up to date with new product offerings and features will set you apart from others in the community as you look for a job.

Secondly, it was important to me to share that soft skills are just as important as technical knowledge. No matter your role in Salesforce, whether consultant, developer, or admin, you need to be able to communicate your solution ideas to business users. You can have the greatest technical solution, but if you cannot explain how it will work in the business’s terms, they will not buy into it or adopt it. It was best said by one of the IUPUI instructors: “You must be able to convert the nerd to business, and the business to nerd.”

Whether you are new to the ecosystem or a veteran, take advantage of the resources that are out there. Get on Trailhead. Post questions and answers on the Trailblazer Community. Join your local Salesforce User Groups! I look forward to seeing you at a future Salesforce Event!

ILMT: A Gateway to SAM?

To take advantage of Sub-capacity licensing and its inherent value, IBM customers are required to: 1) install either the IBM License Metric Tool (ILMT) or BigFix Inventory (BFI), 2) produce audit reporting on at least a quarterly basis, and 3) retain such reports for two years. However, while IBM mandates the use of these tools for Sub-capacity license owners, it does not contractually require that they be used to their optimal capabilities. This is where many customers miss an important opportunity to start down the road toward formalized Software Asset Management.

IBM requires the use of these tools only where sub-capacity software is installed, but if the BigFix agent is installed elsewhere, organizations can also discover other, non-sub-capacity IBM software instances (and, in the case of BigFix Inventory, over 40,000 other software titles from over 9,600 other publishers).

Like many IBM customers, the last company I worked for installed ILMT and struggled with it initially. Early versions of the tool were hardly perfect, and this gave it a “bad rep” with the IT staff. With perseverance, however, we were eventually able to stabilize the application and begin to extract some value from it in the form of software discovery reporting and the requisite audit reporting.

We quickly realized, however, that the value of software discovery data on its own is extremely limited. To make sense of the data that was captured, more background information was needed.

Specifically, we required a complete understanding of the IBM license inventory we held; without it, we were unable to accurately bundle products within ILMT or produce even the simplest reconciliation to our Effective License Position.
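
As an illustration of what that reconciliation looks like once the license inventory exists, here is a minimal Python sketch. The product names, PVU figures, and field names are hypothetical and are not ILMT output formats; it simply compares entitled capacity against reported consumption and flags any shortfall per product.

# Hypothetical entitlements from the compiled license inventory (product -> PVUs owned).
entitlements = {"IBM DB2 Advanced": 1200, "IBM WebSphere ND": 800, "IBM MQ": 400}

# Hypothetical consumption taken from an ILMT/BFI audit report (product -> PVUs deployed).
consumption = {"IBM DB2 Advanced": 1000, "IBM WebSphere ND": 960, "IBM MQ": 200}

def effective_license_position(entitled, deployed):
    """Return per-product surplus (positive) or shortfall (negative) in PVUs."""
    products = sorted(set(entitled) | set(deployed))
    return {p: entitled.get(p, 0) - deployed.get(p, 0) for p in products}

for product, position in effective_license_position(entitlements, consumption).items():
    status = "surplus" if position >= 0 else "SHORTFALL"
    print(f"{product}: {position:+d} PVUs ({status})")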

Once this license inventory was compiled, however, we found that we were able to better control our software estate and, as an unforeseen secondary benefit, found ourselves well on our way down the path of Software Asset Management (all because we were using a contractually required IBM tool).

If you are interested in understanding how ILMT can provide additional value and be a ‘Gateway to SAM’ for your own journey, please read the complete article at: //itak.iaitam.org/ilmt-gateway-sam.

Bundling Best Practices – Exclusion and Suppression

Perhaps the most important aspect of fully utilizing IBM’s ILMT (or BigFix Inventory) tool is being able to accurately create bundling relationships. For those who are unfamiliar, bundling is the process by which users confirm the specific IBM product that each discovered software instance relates to. In a generic sense, each confirmed bundling creates a record in the database that specifically identifies which purchased license is used to cover each discovered software instance.
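
To make the idea concrete, here is a minimal Python sketch of what a confirmed bundling assignment captures: one discovered component instance tied to the licensed product that covers it. The component and product names and the record fields are hypothetical, not the actual ILMT schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class BundlingRecord:
    """One confirmed bundling: a discovered component assigned to a licensed product."""
    discovered_component: str   # what the ILMT/BFI scan found
    host: str                   # where it was found
    licensed_product: str       # which purchased license covers it
    confirmed_by: str
    confirmed_on: date

# Hypothetical example: a scanned WebSphere component bundled under the ND license.
record = BundlingRecord(
    discovered_component="IBM WebSphere Application Server 9.0",
    host="appsrv01",
    licensed_product="IBM WebSphere Application Server Network Deployment",
    confirmed_by="DMGILBER",
    confirmed_on=date(2018, 1, 8),
)
print(record)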

Sometimes, however, a software signature is discovered that is identified in some way as “inappropriate”, because bundling it to a product would falsely increase the amount of licensing required to cover that product.

Reasons for this inappropriateness are varied, and new scenarios can be identified at any time.

Some examples include:
• Products that have already been uninstalled – but whose registry or XML signatures continue to appear in scans
• Components of an IBM product that are used and licensed by third party software developers as part of one of their products (e.g., Cognos, DB2, etc.)
• Components that are discovered on servers that are used exclusively as code repository file systems
• Components that are misidentified or in some other way disputed – where an IBM Service Request ticket has been submitted and addressed with no effective remediation recommendations provided.
• Certain cases where software installations are in place as Disaster Recovery backup*
In these and similar cases, something needs to be done to avoid potential overcounting, and there are two options available: Exclusion and Suppression.

The Exclusion function allows the software instance to continue to scan and show up on reporting, but as a No Charge instance. Suppression, on the other hand, effectively removes the instance from being considered for bundling or other tool analytic functions. So the question really is: which process should you use?
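
As a toy illustration of the difference (the instance data and field names below are hypothetical, not ILMT’s actual report format), an excluded instance stays visible on reporting as No Charge, while a suppressed instance drops out of consideration entirely:

# Hypothetical discovered instances; "action" mimics what an ILMT/BFI user might have applied.
instances = [
    {"component": "IBM MQ 9.1",          "host": "mqsrv01",  "action": None},
    {"component": "IBM DB2 11.1",        "host": "reposrv1", "action": "excluded"},    # stays visible, No Charge
    {"component": "IBM Cognos leftover", "host": "appsrv02", "action": "suppressed"},  # removed from consideration
]

# Suppressed instances disappear from the report; excluded ones remain but carry no charge.
report = [
    {**i, "charge": "No Charge" if i["action"] == "excluded" else "Chargeable"}
    for i in instances
    if i["action"] != "suppressed"
]

for row in report:
    print(f"{row['component']:<22} {row['host']:<10} {row['charge']}")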

Regardless, it is up to the ILMT/BFI tool user to control the implementation of these options – to avoid inappropriately undercounting their software.

To do this, I suggest the following Best Practice rules for use:
1) Use these options only as a last resort. If signature files continue to scan after uninstallation, investigate and remove them manually (if possible). If you have code repositories, confirm that the software versions are still current; if not, uninstallation is preferred.

2) Only use the Suppression functionality in the limited case where the discovered signature is truly a “False Positive” – otherwise, stick to Exclusions.

3) For both functions, you can enter a comment to explain why the action was taken. Do this for every exclusion or suppression, and do so in the following manner:

a) Identify who executed the Exclusion/Suppression and the Date when it was invoked
b) Briefly state a reason – and include the IBM service request ticket number if appropriate.
c) If you were able to find online “evidence” supporting your conclusion that the instance should not be counted, include a URL link to that information as well.
d) Keep a list of any reasons that will “pop up” again in the future, so you can reuse the same text for future Exclusions/Suppressions
e) Take advantage of the “rules definitions” functionality if appropriate

For Example:

DMGILBER – 01/08/2018: Per T10007005 Based on rules confirmed by IBM – this instance of IIB does not require licensing.
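
Here is a small Python sketch of how that comment convention could be kept uniform. The helper and the reason catalogue are illustrative only, not part of ILMT/BFI; the sketch simply assembles the who/when/reason/ticket text in the same shape as the example above and reuses standard reason wording.

from datetime import date

# Reusable reason texts (rule d): keep wording identical for recurring scenarios.
STANDARD_REASONS = {
    "third_party_embedded": "Component embedded in a third-party product; licensing covered by that vendor.",
    "confirmed_no_charge": "Based on rules confirmed by IBM - this instance does not require licensing.",
}

def exclusion_comment(user, reason_key, ticket=None, evidence_url=None, when=None):
    """Build a uniform exclusion/suppression comment: who, date, reason, ticket, evidence."""
    when = when or date.today()
    parts = [f"{user} - {when.strftime('%m/%d/%Y')}:"]
    if ticket:
        parts.append(f"Per {ticket}")
    parts.append(STANDARD_REASONS[reason_key])
    if evidence_url:
        parts.append(f"See {evidence_url}")
    return " ".join(parts)

# Reproduces the shape of the example above.
print(exclusion_comment("DMGILBER", "confirmed_no_charge", ticket="T10007005",
                        when=date(2018, 1, 8)))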

By creating a policy for the uniform use of these functions, users will maintain control over their ILMT/BFI data and reporting. Also, note that both functions can be reversed in case they were applied in error.

The Exclusion and Suppression functionality in ILMT and BFI is there to make your reporting and bundling as accurate as possible, but if used haphazardly or inappropriately, it can be a source of confusion, risk, and error. If you keep these functions well controlled, they are valuable tools that will serve you well.

* See IBM documentation for complete rules regarding Backup licensing

If you’d like to learn more about bundling best practices, contact us!