This repository documents all the steps and roadmaps that contribute to learning "DevOps". I will continue this journey until the end of 2023. The reason for documenting these days is so that others can take something from it and, hopefully, help enhance the resources.
This will not cover all things "DevOps" but it will cover the areas that I feel will benefit my learning and understanding overall.
In this journey we will build many projects, covering on-premises DevOps and infrastructure.
π The quickest way to get in touch is via LinkedIn; my handle is @Bilal Mazhar
π Learning strategy and calendar can be found here: Learning Plan
[βοΈ] = Content uploaded
[π§] = In Progress
[βοΈ] = Not Started
DevOps is a methodology that focuses on collaboration and communication between software development teams and IT operations teams to streamline software delivery and improve efficiency. To become a DevOps practitioner, you need to have a solid foundation in several areas, including:
- Software Development
- System Administration
- Automation
- Cloud Computing
- Collaboration and Communication
- Continuous Integration and Continuous Deployment
- Containers and Orchestration
[βοΈ] βΎοΈ 1 : Introduction to DevOps
[βοΈ] βΎοΈ 2 : 12 Factor Application and its security
[βοΈ] π§ 3 : Introduction to Cloud Native
[βοΈ] π§ 4 : Introduction to Linux
[βοΈ] π₯ 5 : Introduction to Scripting Languages
[βοΈ] π 6 : Introduction to Python Programming
[βοΈ] πΉ 7 : Introduction to Go Programming
[βοΈ] βοΈ 8 : Introduction to the Cloud
[βοΈ] βοΈ 9 : Introduction to Containers
[βοΈ] βΎοΈ 10 : A Day in the Life of a DevOps Engineer
[βοΈ] π 1 : DevOps Books
[βοΈ] π 2 : Bash Practice Scripts
[βοΈ] π 3 : Python Practice Scripts
[βοΈ] π 4 : Go Practice Scripts
Hands-on experience is essential to learning DevOps, as it provides an opportunity to apply theoretical knowledge in a practical setting, and to gain a deeper understanding of how DevOps tools and practices work in the real world.
Some key reasons why hands-on experience is important in learning DevOps include:
1. Learning by doing: Hands-on experience allows you to actively engage with DevOps tools and practices, and to learn through trial and error. This can help to reinforce theoretical concepts, and to develop a more intuitive understanding of how things work.
2. Building practical skills: By working with DevOps tools and practices in a real-world setting, you can develop practical skills that are directly applicable to your work as a DevOps professional.
3. Gaining confidence: Hands-on experience can help to build confidence in your ability to work with DevOps tools and practices, and to tackle complex problems in a production environment.
4. Improving problem-solving skills: By working through real-world problems and challenges, you can develop your problem-solving skills and learn how to troubleshoot issues in a production environment.
5. Developing a portfolio: Hands-on experience can help you build a portfolio of work that demonstrates your skills and experience to potential employers or clients.
[βοΈ] π¬ 1 : Tools of DevOps
[βοΈ] π¬ 2 : VMware || VirtualBox installations
[βοΈ] π¬ 3 : Linux
[βοΈ] π¬ 4 : Python || Go IDE installations
[βοΈ] π¬ 5 : Git
[βοΈ] π¬ 6 : GitHub
[βοΈ] π¬ 7 : Jenkins
[βοΈ] π¬ 8 : Ansible
[βοΈ] π¬ 9 : Docker
[βοΈ] π¬ 10 : Kubernetes
[βοΈ] π¬ 11 : OpenShift
[βοΈ] π¬ 12 : OpenStack
[βοΈ] π¬ 13 : Terraform
[βοΈ] π¬ 14 : AWS Account - Setup
[βοΈ] π¬ 15 : Networking Lab - Setup
[βοΈ] π¬ 16 : Azure Account - Setup
[βοΈ] π¬ 17 : Google Cloud Account - Setup
[βοΈ] π¬ 18 : PowerBI
[βοΈ] π¬ 19 : ELK
[βοΈ] π¬ 20 : Tableau
Linux administration is the process of managing and maintaining a Linux-based operating system (OS). It involves configuring and managing various aspects of the system, such as hardware, software, and network components, to ensure that the system runs smoothly and efficiently.
Some common tasks involved in Linux administration include:
1. Installing and configuring the operating system and software packages
2. Managing users and groups, including setting permissions and access controls
3. Managing file systems, including creating and managing partitions, directories, and files
4. Managing network configurations, including configuring network interfaces, DNS, and routing
5. Monitoring system performance, including tracking resource utilization and identifying and resolving bottlenecks
6. Managing backups and disaster recovery processes
7. Configuring and managing security features, including firewalls, intrusion detection and prevention, and access controls.
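As a small taste of tasks 2 and 3 above, the sketch below parses an `/etc/passwd`-style user record and renders a numeric permission mode the way `ls -l` displays it. The `deploy` user is made up for illustration.

```python
import stat

def parse_passwd_line(line):
    """Split one /etc/passwd-style record (task 2: managing users)."""
    name, _pw, uid, gid, _gecos, home, shell = line.strip().split(":")
    return {"name": name, "uid": int(uid), "gid": int(gid),
            "home": home, "shell": shell}

# Task 3: file permissions. stat.filemode turns a raw mode into rwx notation.
print(parse_passwd_line("deploy:x:1001:1001:Deploy user:/home/deploy:/bin/bash"))
print(stat.filemode(0o100644))  # -rw-r--r--  (owner read/write, others read)
```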
[βοΈ] π¨ 1 : What is system administration?
[βοΈ] π 2 : What are the tasks of systems administration?
[βοΈ] π‘οΈ 3 : A day in the life of a systems administrator
Networking fundamentals refer to the basic concepts, principles, and technologies that underlie computer networking. Here are some key networking fundamentals:
1. Network architecture: This refers to the design and layout of a computer network. It includes the physical components (such as servers, switches, routers, and cables) as well as the logical components (such as protocols and services) that define how the network functions.
2. Protocols: These are rules and standards that govern how data is transmitted over a network. Examples of common protocols include TCP/IP, HTTP, and SMTP.
3. Network topologies: These refer to the physical layout of a network. Examples of network topologies include star, bus, and mesh.
4. Network addressing: This involves assigning unique addresses to each device on the network, which enables them to communicate with each other. Common network addressing schemes include IP addressing and MAC addressing.
5. Routing: This refers to the process of directing data packets between different networks. Routers are used to perform this function.
6. Network security: This involves implementing measures to protect a network from unauthorized access, data breaches, and other security threats. Examples of network security measures include firewalls, VPNs, and encryption.
7. Wireless networking: This involves the use of wireless technologies to connect devices to a network. Examples of wireless networking technologies include Wi-Fi, Bluetooth, and cellular networks.
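Items 4 and 5 above can be tried directly with Python's standard `ipaddress` module; the addresses below are the usual private-range examples, not from any real network.

```python
import ipaddress

# Item 4 (addressing): every device gets a unique address inside a network.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.num_addresses)  # 256 addresses in a /24
print(net.netmask)        # 255.255.255.0

host = ipaddress.ip_address("192.168.1.42")
print(host in net)        # True: this host belongs to the subnet

# Item 5 (routing): a router picks the most specific (longest) matching prefix.
routes = [ipaddress.ip_network("0.0.0.0/0"),      # default route
          ipaddress.ip_network("192.168.0.0/16"),
          ipaddress.ip_network("192.168.1.0/24")]
best = max((r for r in routes if host in r), key=lambda r: r.prefixlen)
print(best)               # 192.168.1.0/24
```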
[βοΈ] π 1 : Introudction to Networking
[βοΈ] π 2 : OSI Model - 7 Layers of network
[βοΈ] π 3 : Network Protocols
[βοΈ] π 4 : Introduction to GNS3
Version control is a system that enables you to manage changes to a file or set of files over time. It is commonly used in software development to track changes to source code, but it can also be used for other types of files such as documents, images, and configuration files.
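To make the idea of "tracking changes over time" concrete: Git identifies every version of a file by a hash of its contents. The snippet below reproduces Git's actual blob-ID calculation, so identical content always gets the same ID and any edit produces a new one.

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Compute the ID Git assigns to file contents:
    SHA-1 over a small 'blob <size>\\0' header plus the raw bytes."""
    header = b"blob " + str(len(content)).encode() + b"\x00"
    return hashlib.sha1(header + content).hexdigest()

# Same bytes -> same ID, so Git can tell instantly whether a file changed.
print(git_blob_hash(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

This matches what `git hash-object` reports for the same content.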
[βοΈ] π 1 : Introduction to Version Control
[βοΈ] π 2 : Git
[βοΈ] π± 3 : Github
[βοΈ] π¦ 4 : GitLab
Continuous Integration (CI) and Continuous Deployment (CD) are two related concepts in software development that aim to streamline the process of building, testing, and deploying software.
1. Continuous Integration (CI) is the practice of frequently merging code changes from multiple developers into a shared code repository. Each code change is automatically built and tested to detect integration errors early and prevent issues from being introduced into the codebase.
2. Continuous Deployment (CD) is the practice of automatically deploying changes to the production environment after they have been built and tested in a staging or testing environment. This ensures that new features and bug fixes are delivered to users quickly and reliably.
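The gate-keeping behaviour described above can be sketched as a toy pipeline runner: stages run in order and everything after the first failure is skipped, which is how a CI server keeps a broken build away from deployment. The stage functions here are stand-ins for real build/test commands.

```python
def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failing step."""
    results = {}
    for name, step in stages:
        ok = step()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # later stages never run after a failure
    return results

stages = [
    ("build", lambda: True),    # stand-in for compiling/packaging
    ("test", lambda: False),    # stand-in for the automated test suite
    ("deploy", lambda: True),   # CD step: only reached if tests pass
]
print(run_pipeline(stages))  # {'build': 'passed', 'test': 'failed'}
```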
[βοΈ] ποΈ 1 : Introduction to CI / CD
[βοΈ] ποΈ 2 : Jenkins
[βοΈ] ποΈ 3 : GitLab CI/CD
[βοΈ] ποΈ 4 : CircleCI
[βοΈ] ποΈ 1 : Python
[βοΈ] ποΈ 2 : Python - Django
[βοΈ] ποΈ 3 : Java
[βοΈ] ποΈ 4 : Java Web Programming
[βοΈ] ποΈ 5 : Go
A container is a lightweight, portable unit of software that encapsulates an application and its dependencies, allowing it to run consistently across different environments. Containers are an important technology for modern software development and deployment, particularly in the context of cloud computing and microservices architecture.
1. Portability: Containers can run on any platform that supports containerization, providing a consistent environment for applications regardless of the underlying infrastructure.
2. Scalability: Containers can be easily replicated and deployed across multiple hosts, making it easy to scale applications up or down based on demand.
3. Efficiency: Containers are lightweight and can be started and stopped quickly, making them an efficient way to manage resources and reduce costs.
4. Consistency: Containers provide a consistent runtime environment for applications, reducing the likelihood of configuration errors and compatibility issues.
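One way to picture how a container image stays consistent is as a stack of read-only layers with a single writable layer on top. The toy model below (not a real container runtime) uses `collections.ChainMap` to show that writes land only in the top layer, leaving the image untouched; the file paths are invented for illustration.

```python
from collections import ChainMap

# Read-only image layers, most specific first.
base_layer = {"/bin/sh": "shell", "/etc/os-release": "alpine"}
app_layer = {"/app/server.py": "v1"}
image = ChainMap(app_layer, base_layer)

# A running container = the image layers plus one writable layer on top.
writable = {}
container = ChainMap(writable, *image.maps)
container["/app/server.py"] = "v1-patched"  # write lands in the top layer only

print(container["/app/server.py"])  # v1-patched (the container's view)
print(image["/app/server.py"])      # v1 (image layers are untouched)
```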
[βοΈ] ποΈ 1 : Introduction to Containers
[βοΈ] ποΈ 2 : Docker
Kubernetes, also known as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
1. Container orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications, making it easy to manage large and complex deployments.
2. Self-healing: Kubernetes can automatically detect and recover from failures in the system, ensuring that applications are always available.
3. Load balancing: Kubernetes provides built-in load balancing for containerized applications, distributing traffic across multiple instances of an application.
4. Scalability: Kubernetes enables applications to scale up or down based on demand, automatically provisioning or decommissioning resources as needed.
5. Rollouts and rollbacks: Kubernetes provides a way to manage updates and changes to applications, allowing for rollouts and rollbacks of new versions without disrupting service.
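The self-healing and scaling behaviour above boils down to a reconciliation loop: compare the desired state with what is actually running and compute a corrective action. The sketch below is a toy version of that idea, not the real Kubernetes API.

```python
def reconcile(desired: int, observed: int) -> str:
    """Toy controller step: bring the observed replica count toward the desired one."""
    if observed < desired:
        return f"start {desired - observed} replica(s)"
    if observed > desired:
        return f"stop {observed - desired} replica(s)"
    return "in sync"

print(reconcile(desired=3, observed=1))  # start 2 replica(s)  (e.g. a pod crashed)
print(reconcile(desired=3, observed=3))  # in sync
```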
[βοΈ] ποΈ 1 : Introduction to Orchestration
[βοΈ] ποΈ 2 : Kubernetes
[βοΈ] ποΈ 3 : OpenShift
Infrastructure as Code (IaC) is a practice of defining and managing IT infrastructure using code, just like software applications. It involves using declarative or imperative code to automate the provisioning, configuration, and management of infrastructure resources such as servers, networks, storage, and applications.
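The declarative style described above can be sketched as a tiny "plan" step, loosely in the spirit of tools like Terraform: diff the desired state written in code against the current state and emit the actions needed to converge. The resource names and attributes are invented for illustration.

```python
def plan(desired: dict, current: dict) -> list:
    """Compute create/destroy/update actions to move current state to desired state."""
    actions = []
    for name in sorted(desired.keys() - current.keys()):
        actions.append(("create", name))
    for name in sorted(current.keys() - desired.keys()):
        actions.append(("destroy", name))
    for name in sorted(desired.keys() & current.keys()):
        if desired[name] != current[name]:
            actions.append(("update", name))
    return actions

desired = {"web-server": {"size": "large"}, "database": {"size": "small"}}
current = {"web-server": {"size": "small"}}
print(plan(desired, current))  # [('create', 'database'), ('update', 'web-server')]
```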
[βοΈ] ποΈ 1 : Introduction to IaC
[βοΈ] ποΈ 2 : Ansible
[βοΈ] ποΈ 3 : Terraform
In the context of computing, "cloud" generally refers to a network of remote servers that are used to store, manage, and process data and applications, rather than using a local server or personal computer. Cloud computing allows users to access computing resources, such as processing power, storage, and networking, over the internet on an on-demand basis, without the need for physical hardware or infrastructure.
[βοΈ] ποΈ 1 : Introduction to Cloud
[βοΈ] ποΈ 2 : AWS Certified Cloud Practitioner
[βοΈ] ποΈ 3 : Microsoft Certified: Azure Fundamentals
[βοΈ] ποΈ 4 : Google Associate Cloud Engineer
Cloud DevOps refers to the practice of applying DevOps principles and practices to cloud-based infrastructure and applications. This involves using cloud-based services and tools to build, deploy, and manage applications, while also leveraging DevOps practices such as continuous integration and delivery, automation, and monitoring.
[βοΈ] ποΈ 1 : Introduction to Cloud DevOps
[βοΈ] ποΈ 2 : AWS Certified DevOps Engineer
[βοΈ] ποΈ 3 : Microsoft Certified: DevOps Engineer Expert
[βοΈ] ποΈ 4 : Google Professional Cloud DevOps Engineer
Monitoring is an essential aspect of DevOps, as it enables teams to quickly detect and respond to issues in production environments. Monitoring involves collecting, analyzing, and visualizing data from applications and infrastructure, in order to identify trends, anomalies, and performance issues that may impact the user experience.
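One common, simple alerting rule behind this kind of anomaly detection is to flag samples that sit far above a recent baseline (here, mean plus three standard deviations). The latency numbers below are made up for illustration.

```python
import statistics

def anomalies(samples, window):
    """Flag samples far above the baseline computed from a recent window."""
    baseline = statistics.mean(window)
    threshold = baseline + 3 * statistics.stdev(window)
    return [s for s in samples if s > threshold]

window = [100, 105, 98, 102, 101, 99, 103, 100]  # recent "normal" latencies (ms)
samples = [101, 97, 450, 103]                    # new observations
print(anomalies(samples, window))  # [450]
```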
[βοΈ] ποΈ 1 : Introduction to Monitoring, Logs and Visualization
[βοΈ] ποΈ 2 : Splunk
[βοΈ] ποΈ 3 : PowerBI
[βοΈ] ποΈ 4 : Tableau
[βοΈ] ποΈ 5 : ELK
[βοΈ] ποΈ 6 : Nagios
[βοΈ] ποΈ 7 : OpenStack
[βοΈ] Feedback