DevOps Interview Preparation

  1. What tools do you use on a daily basis from the Linux perspective?
    I use a variety of tools on a daily basis, including bash, ssh, scp, rsync, and vim.
  2. What are inodes in Linux?
    Inodes in Linux are data structures that contain information about a file, such as its location on disk, permissions, and ownership.
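    For example, the inode number and its contents can be inspected from the shell; a hard link is simply a second directory entry pointing at the same inode:

```shell
# Every file's metadata lives in an inode; the filename is just a
# directory entry pointing at the inode number.
tmp=$(mktemp)

# -i prints the inode number alongside the filename
ls -i "$tmp"

# stat shows the inode contents: inode number, permissions,
# ownership, timestamps, and link count
stat "$tmp"

# A hard link is a second name for the same inode
ln "$tmp" "$tmp.link"
ls -i "$tmp" "$tmp.link"   # both names show the same inode number

rm -f "$tmp" "$tmp.link"
```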
  3. Any experience with RAID configurations?
    Yes, I have experience with RAID configurations, including RAID 0, RAID 1, and RAID 5.
  4. Any experience with Docker?
    Yes, I have experience with Docker, including creating and managing containers, images, and volumes.
  5. What networking can you use for communication between different containers?
    I use Docker’s built-in networking features, such as bridges and overlays, to enable communication between containers.
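    A minimal sketch of a user-defined bridge network in Compose (service names and the application image are hypothetical); containers on the same network can reach each other by service name:

```shell
# Sketch: two hypothetical services attached to a user-defined bridge
# network, so they can resolve each other by service name.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    networks:
      - appnet
  api:
    image: myorg/api:latest   # hypothetical application image
    networks:
      - appnet

networks:
  appnet:
    driver: bridge            # use "overlay" for multi-host (Swarm) setups
EOF

# Equivalent with the plain Docker CLI:
# docker network create --driver bridge appnet
# docker run -d --network appnet --name web nginx:alpine
```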
  6. Have you created your own images for containers? What application was it?
    Yes, I have created my own images for containers, including a custom image for a web application.
  7. What were the security considerations that you have taken?
    I have taken several security considerations, including using secure passwords, limiting access to containers, and using Docker’s built-in security features, such as SELinux and AppArmor.
  8. What’s the difference between a StatefulSet and a DaemonSet?
    A StatefulSet is a Kubernetes resource that manages stateful applications, such as databases, giving each pod a stable identity and its own persistent storage, while a DaemonSet is a Kubernetes resource that runs a copy of a pod on every node in a cluster.
  9. Where or why did you use a StatefulSet? How does it help?
    I used a StatefulSet to manage a database application, which requires persistent storage and stable network identities. The StatefulSet ensures that each database pod keeps its identity and volume across restarts and can be scaled in an ordered way.
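    A minimal StatefulSet sketch (names, image, and sizes are hypothetical): each replica gets a stable DNS name (db-0, db-1, ...) and its own PersistentVolumeClaim from the volumeClaimTemplates section:

```shell
cat > statefulset.yaml <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service providing stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per replica, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
EOF
# kubectl apply -f statefulset.yaml
```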
  10. While creating a manifest file, what are the 4 major things you have to add to it?
    The 4 major points that I add to a manifest file are the apiVersion, kind, metadata, and spec.
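    The four top-level fields can be seen on a minimal Deployment manifest (names and image are hypothetical):

```shell
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1          # API group/version of the resource
kind: Deployment             # the resource type
metadata:                    # identifying data: name, namespace, labels
  name: web
  labels:
    app: web
spec:                        # desired state of the resource
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
EOF
# kubectl apply -f deployment.yaml
```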
  11. What did your Jenkins environment look like? What considerations did you make while setting it up?
    I used a Jenkins environment with a master and several slaves, and I considered factors such as scalability, security, and reliability.
  12. What is the default home directory for Jenkins for a master and for a slave?
    On typical Linux package installs, the default home directory for the Jenkins master is /var/lib/jenkins. Slaves (agents) have no fixed default; they use whatever remote root directory is configured for the agent, commonly /home/jenkins.
  13. Were you using the root user to run the agent part of Jenkins in the slave servers?
    No, I used a non-root user to run the agent part of Jenkins in the slave servers, for security reasons.
  14. Have you worked on setting up additional storage for your worker nodes in Kubernetes?
    Yes, I have worked on setting up additional storage for worker nodes in Kubernetes, using persistent volumes and stateful sets.
  15. What have you done on the Prometheus Grafana part?
    I have set up Prometheus and Grafana to monitor and visualize metrics for a Kubernetes cluster, including CPU usage, memory usage, and network traffic.
  16. How were logging and monitoring set up in the Kubernetes cluster with Prometheus and Grafana?
    I set up Prometheus and Grafana to scrape metrics from Kubernetes components, such as the API server and the scheduler, and to visualize the metrics in dashboards.
  17. What is required so that Prometheus gets these details?
    Prometheus requires a scrape configuration to collect metrics from targets, such as Kubernetes components or applications.
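    A sketch of such a scrape configuration (job names and target address are hypothetical). In-cluster, kubernetes_sd_configs discovers targets automatically, with relabeling used to select which ones to keep:

```shell
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: apiserver
    scheme: https
    kubernetes_sd_configs:
      - role: endpoints
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: kubernetes
        action: keep

  - job_name: my-app           # hypothetical application exposing /metrics
    static_configs:
      - targets: ["my-app:8080"]
EOF
```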
  18. Coming to Terraform, what was your role on that?
    I used Terraform to manage infrastructure as code, including creating and managing AWS resources, such as EC2 instances and S3 buckets.
  19. What modules have you worked on?
    I have worked on several Terraform modules, including modules for creating EC2 instances, S3 buckets, and RDS databases.
  20. Why was it required to create a custom module?
    I created a custom module to manage a specific type of resource that was not supported by the standard Terraform modules.
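    A sketch of how such a custom module is called from a root configuration (the module name, path, and variables here are all hypothetical):

```shell
dir=$(mktemp -d)
cat > "$dir/main.tf" <<'EOF'
module "app_storage" {
  source      = "./modules/app_storage"   # local custom module
  environment = "staging"
  bucket_name = "example-app-artifacts"
}

output "bucket_arn" {
  value = module.app_storage.bucket_arn
}
EOF
# terraform init && terraform plan
```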
  21. What is git squash?
    Squashing in git combines multiple commits into a single commit, typically via an interactive rebase (git rebase -i) or git merge --squash, which is useful for cleaning up a repository’s history.
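    A minimal demonstration in a throwaway repository, squashing the last three commits into one via a non-interactive rebase:

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

for i in 1 2 3 4; do
  echo "$i" >> file.txt
  git add file.txt && git commit -qm "commit $i"
done

# Rewrite the last 3 commits: the todo list keeps the first as "pick" and
# marks the other two "squash". GIT_SEQUENCE_EDITOR edits the todo list
# non-interactively; GIT_EDITOR=true accepts the combined message as-is.
GIT_SEQUENCE_EDITOR='sed -i "2,3s/^pick/squash/"' GIT_EDITOR=true \
  git rebase -i HEAD~3

git log --oneline    # now shows 2 commits instead of 4
```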
  22. What is git stash? Why is it used?
    git stash is a command that temporarily shelves uncommitted changes and restores a clean working tree, which is useful for switching branches or saving work in progress; git stash pop re-applies the saved changes.
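    A minimal demonstration in a throwaway repository, stashing work in progress and then restoring it:

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev
echo "v1" > file.txt && git add file.txt && git commit -qm "initial"

echo "work in progress" >> file.txt   # uncommitted change

git stash          # saves the change and restores a clean working tree
git status --short # prints nothing: the tree is clean

git stash pop      # re-applies the change and drops it from the stash
git status --short # file.txt shows as modified again
```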
  23. What are the different types of disks that are available in AWS?
    For EBS, AWS offers General Purpose SSD (gp2/gp3), Provisioned IOPS SSD (io1/io2), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes; many instance families also provide local NVMe instance store.
  24. How would you choose an instance type when you have to deploy one? What parameters would you take into consideration?
    When deploying an instance, I would consider several parameters to determine the appropriate instance type, including:
    • Compute requirements: The number of vCPUs, memory, and storage needed to run the application.
    • Network requirements: The amount of network bandwidth and latency required for the application.
    • Storage requirements: The type and amount of storage needed, such as SSD or HDD.
    • Cost: The budget for the instance and the cost of running it.
    • Scalability: The need for scalability and flexibility in the instance type.
    I would also use the Amazon EC2 instance type documentation and the AWS Pricing Calculator to help determine the best instance type for the application.
  25. What tools have you used to backup?
    I have used several tools to backup data, including:
    • AWS Backup: A fully managed backup service that makes it easy to centralize and automate backups across AWS services.
    • AWS S3: An object storage service that can be used to store backups of data.
    • rsync: A command-line tool that can be used to synchronize files and directories across systems.
    • cron: A job scheduler that can be used to automate backups.
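    The rsync-plus-cron combination above can be sketched as follows (all paths hypothetical):

```shell
# Incremental backup of a directory with rsync, scheduled via cron.
src=$(mktemp -d) && dst=$(mktemp -d)
echo "important data" > "$src/data.txt"

# -a preserves permissions/ownership/timestamps; --delete mirrors removals
rsync -a --delete "$src/" "$dst/"
ls "$dst"    # data.txt

# Crontab entry to run the same backup every night at 02:00:
# 0 2 * * * rsync -a --delete /srv/app/ /backup/app/
```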
  26. Have you used S3? What was the purpose? And which storage classes have you used?
    Yes, I have used S3 to store backups of data, as well as to serve static websites and store data for analytics and machine learning applications. I have used S3 classes such as S3 Standard, S3 Intelligent-Tiering, and S3 Glacier to store data based on its access frequency and retention requirements.
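    Tiering by access frequency is often automated with a lifecycle policy; a sketch (bucket name, prefix, and day thresholds are hypothetical):

```shell
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-with-age",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/"},
      "Transitions": [
        {"Days": 30,  "StorageClass": "INTELLIGENT_TIERING"},
        {"Days": 365, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
EOF
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket example-bucket --lifecycle-configuration file://lifecycle.json
```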
  27. What other AWS services have you used?
    I have used several other AWS services, including:
    • EC2: A virtual server service that provides compute capacity in the cloud.
    • RDS: A managed relational database service that makes it easy to set up, manage, and scale a relational database in the cloud.
    • Lambda: A serverless compute service that allows you to run code without provisioning or managing servers.
    • API Gateway: A fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at scale.
    • CloudWatch: A monitoring and management service that provides visibility into AWS resources and applications.
  28. Have you used load balancing based on routing?
    Yes, I have used routing-based load balancing with Elastic Load Balancing (ELB), the AWS load-balancing service family, which includes:
    • ALB (Application Load Balancer): A load balancing service that provides advanced routing and targeting capabilities, including support for HTTP/2 and WebSockets.
    • NLB (Network Load Balancer): A load balancing service that provides high-performance, low-latency load balancing for TCP, UDP, and TLS traffic.