

Showing posts with the label devops

Bird's-eye view of Kubernetes objects

 Kubernetes provides a 'software view' of the 'hardware' world: all resources are consumed through modern JSON/YAML definitions and via an API. Kubernetes segments compute resources into worker nodes and master node(s), and contains persistent entities called 'Kubernetes Objects', including:

- Containerised applications
- The cluster and its associated nodes
- The resources assigned to these nodes
- The policies and tolerances governing how the applications interact and behave

Below is a good diagram of the various components. Each component can be defined by software/code and is scalable, which makes Kubernetes the de-facto building framework for modern micro-service applications. In most scenarios the components can be tiered into:

- Host/virtual machines
- Kubernetes platform
- Containers
- Microservices

It is hence very important to understand the difference between the traditional 2-tier model and the Kubernetes 4-tier model for all your operational, security and observability needs for a succes…
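As the excerpt notes, Kubernetes Objects are declared in YAML or JSON. A minimal, hypothetical Deployment manifest for a containerised application might look like the sketch below (the name, labels and image are illustrative, not from the post):

```yaml
# Hypothetical Deployment: one kind of 'Kubernetes Object'
# declaring a containerised application and its desired state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25  # illustrative container image
          ports:
            - containerPort: 80
```

Because the whole object is code, it can be version-controlled and applied repeatedly via the API, which is what makes the platform scalable by definition.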

Clone a VM and create multiple VMs using Vagrant

Vagrant is an excellent tool for automation and for proofs of concept (POCs). In many POCs you might need a cluster, and Vagrant can build one in a matter of minutes by cloning an existing VM into multiple virtual machines.

Assumptions: you have a basic idea of Linux and Vagrant; we are going to use centos/7 as the Vagrant guest; the host is a Fedora 25/Red Hat/CentOS system (this can easily be done on Ubuntu as well).

Summary of what we are going to do:

- Create a working directory
- Download and install VirtualBox, then Vagrant
- Clean up any unwanted boxes you have
- Put the config file in place and provision
- Validate the nodes

Creating a working directory:

sudo su -
mkdir /opt/vagrantOps
cd /opt/vagrantOps

Download and install VirtualBox and Vagrant on your host (Fedora 25):

vi /etc/yum.repos.d/virtualbox.repo  # with contents as per VirtualBox recommendation
dnf install VirtualBox-5.1           # this will install VirtualBox

f…
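The "config file" step in the summary above refers to a Vagrantfile. A minimal multi-machine sketch, assuming three centos/7 nodes on a private network (node names and IP range are illustrative, not from the post):

```ruby
# Hypothetical Vagrantfile: define three identical centos/7 guests
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = "node#{i}"
      # Give each node a predictable private IP for clustering
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
    end
  end
end
```

Running `vagrant up` in the directory containing this file would then bring up all three nodes, and `vagrant status` validates them.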

Big Data - Jobs, tools and how to ace it

Big Data: overview of structure and jobs. The demand for big data resources has increased dramatically in the past few years. The requirements to create and get the most out of a "Big Data" environment are classified into 3 tiers:

- Base layer - DevOps and infrastructure
- Mid layer - understanding and manipulating data
- Front layer - analytics and data science

I feel the jobs surrounding "Big Data" will also ultimately reflect this, and learning big data should likewise be based on these tiers.

Software suite/tools

Base layer - summary. This layer forms the core infrastructure of the "big data" platform and should be horizontally scalable.

- OS - Linux is the way forward for big data technologies: Red Hat, SuSE, Ubuntu, CentOS
- Distributed computing tools/software - Hadoop, Splunk
- Data storage - Splunk, MongoDB, Apache Cassandra
- Configuration management - Ansible, Puppet, Chef
- Others - networking knowledge, version control (Git)

Mid layer - summary. This…

DevOps : Search and replace files with JSON dataset and template Engine

Today's code-build and continuous-deployment models are highly diverse, which leads to handwritten and complicated perl/awk/sed scripts. DevOps should move on from age-old hand-crafted find-and-replace scripts to modern template engines.

Of course, template engines are available in wide variety. All enterprise configuration-management software (Chef, Puppet, Ansible) is equipped with its own flavour of template engine and playbooks. This article, however, concentrates on the "Mustache" template engine, which is a logic-less template system and works on any text-based data (web pages, scripts, datasets, config files, etc.).

The example below focuses on replacing dynamic text using a JSON dataset. Let's define the terminology:

- Source: the template parent directory, with all its files/directories plus the dynamic variables in them
- Dataset: a JSON-based, self-defining dataset used to replace the above source(s)
- Params: extra parameters that are supplied (eg…
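To make the idea concrete, here is a minimal Python sketch of logic-less `{{variable}}` substitution driven by a JSON dataset. It only mimics Mustache's simplest feature (plain variable interpolation) and is not the Mustache engine itself; the dataset and template below are made up for illustration:

```python
import json
import re

# Hypothetical JSON dataset mapping template variables to values
dataset = json.loads('{"app_name": "billing", "port": "8080"}')

# A logic-less template using Mustache-style {{variable}} tags
template = "server {{app_name}} listens on port {{port}}"

def render(template: str, data: dict) -> str:
    """Replace every {{key}} tag with its value from the dataset."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: data.get(m.group(1), ""), template)

print(render(template, dataset))
# → server billing listens on port 8080
```

Because the template carries no logic, the same source file can be re-rendered against any number of datasets, which is exactly what makes this approach cleaner than ad-hoc sed/awk rewrites.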