AWS Certified DevOps Engineer Pro Notes

  1. You have created a role named InstanceRole and attached a policy to that role, but you are unable to use that role with an instance – Why?
  2. A startup has developed a new restaurant-rating application for mobile devices.  The application has recently increased in popularity, which has resulted in a decrease in performance of the application due to the increased load.  As chief architect of that startup, you’ve noticed that users are spending less time on the application which is driving down revenue, and you’re in charge of solving this issue.  The application has a two-tier architecture with a PHP application tier running in an Auto Scaling group, and a MySQL RDS instance deployed with AWS CloudFormation.  The Auto Scaling group has a min value of 4 and a max value of 8 instances.  The desired capacity is now at 8 because of high CPU utilization of the instances.  Instead of increasing the max capacity of the Auto Scaling group, you realize that switching to a different instance type with more CPU resources would solve the performance problem at a cheaper cost.  How can you change instance types while minimizing downtime for end users?
    • Update the launch configuration in the AWS CloudFormation template with the new C3 instance type.  Add an UpdatePolicy attribute to the Auto Scaling group that specifies an AutoScalingRollingUpdate.  Run a stack update with the updated template.  — Even though you can’t technically update a launch configuration, when we update the LaunchConfiguration resource in AWS CloudFormation it deletes that resource and creates a new launch configuration.  This doesn’t change existing instances – only newly launched instances (after the update).  So, to update existing instances, we need to specify an UpdatePolicy attribute, and by using a rolling update, we can avoid downtime.  http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
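      • A minimal sketch of how this could look, assuming hypothetical logical IDs (WebLaunchConfig, WebAutoScalingGroup), a placeholder AMI ID, and a placeholder stack name; the relevant template fragment is written as a Python dict so it can be serialized and passed to a stack update with boto3:

```python
# Sketch only: a trimmed CloudFormation fragment (real templates also need
# AvailabilityZones/VPCZoneIdentifier, security groups, and so on).
import json
import boto3

template_fragment = {
    "Resources": {
        "WebLaunchConfig": {  # hypothetical logical ID
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "c3.large",          # the new, CPU-heavier type
            },
        },
        "WebAutoScalingGroup": {  # hypothetical logical ID
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "LaunchConfigurationName": {"Ref": "WebLaunchConfig"},
                "MinSize": "4",
                "MaxSize": "8",
            },
            # Replaces instances in small batches so capacity stays in service.
            "UpdatePolicy": {
                "AutoScalingRollingUpdate": {
                    "MinInstancesInService": "4",
                    "MaxBatchSize": "2",
                    "PauseTime": "PT5M",
                }
            },
        },
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.update_stack(
    StackName="rating-app",  # placeholder stack name
    TemplateBody=json.dumps(template_fragment),
)
```

      • Keeping MinInstancesInService at the group’s minimum means at least four instances keep serving traffic while each batch is replaced.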
  3. You have configured the following AWS services: Auto Scaling group, Elastic Load Balancer, and EC2 instances.  Your supervisor has requested that you terminate an instance when CPU utilization is less than 50%.  How can you do this?
    • Create a CloudWatch alarm to send a notification to the Auto Scaling group when the aggregated CPU utilization is less than 40% and configure the Auto Scaling policy to remove one instance.  — You can create a scaling policy that uses CloudWatch alarms to determine when your Auto Scaling group should scale out or scale in.  Each CloudWatch alarm watches a single metric and sends messages to Auto Scaling when the metric breaches a threshold that you specify in your policy.  You can use alarms to monitor any of the metrics that the AWS services you’re using send to CloudWatch, or you can create and monitor your own custom metrics.  http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html
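      • As a rough sketch of wiring this up with boto3 (group and policy names are hypothetical, and the thresholds mirror the answer above):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scaling policy that removes one instance when triggered.
scale_in = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",        # hypothetical group name
    PolicyName="scale-in-on-low-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
)

# Alarm on the group's aggregated CPU utilization; when it stays below the
# threshold, the alarm action invokes the scale-in policy.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-low-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=40.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[scale_in["PolicyARN"]],
)
```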
  4. You are managing a Ruby application that needs Nginx for the front end; Elasticsearch, Logstash, and Kibana for log processing; and MongoDB for document management.  You can have logical groupings of your base AMIs that can take 80% of application binaries loaded on these AMI sets.  You have installed most applications during the bootstrapping process and alter the installation based on configuration sets grouped by instance tags, Auto Scaling groups, or other instance artifacts.  You set a tag on your Nginx instances (such as Nginx-v-1.6.2).  Your update process can query for the instance tag, validate whether it’s the most current version of Nginx, and then proceed with the installation.  What happens when it is time to update the prebaked AMI?
    • You can simply swap your existing AMI with the most recent version in the underlying deployment service and update the tag
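      • A minimal sketch of the version check described above, assuming a hypothetical tag key (nginx-version) on the Nginx instances:

```python
import boto3

ec2 = boto3.client("ec2")

# Find instances still tagged with the old Nginx version so the update
# process can target them (tag key and value are hypothetical).
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:nginx-version", "Values": ["Nginx-v-1.6.2"]}]
)["Reservations"]

outdated = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]
print(outdated)
```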
  5. When using the Elastic Beanstalk application creation wizard, which options are you able to specify?
    • Whether you want a web server tier (which contains a web server and an application server) or a worker tier (which utilizes the Amazon Simple Queue Service); what platform to use as a container for your application (choices include IIS, Node.js, PHP, Python, Ruby, Tomcat, or Docker); and whether to launch as a single instance or create a load-balancing, auto-scaling environment.
  6. After much consideration, based on the requirements of your deployment, you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk.  Unfortunately, you have discovered that there is a resource type that is not supported by CloudFormation.  How can you solve this?
    • Create a custom resource type; this involves a template developer, a custom resource provider, and CloudFormation.
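      • A minimal sketch of the provider side, assuming a Lambda-backed custom resource whose code is defined inline in the template (which is what makes the cfnresponse helper available); create_thing/delete_thing stand in for whatever provisions the unsupported resource:

```python
import cfnresponse  # helper shipped for inline Lambda-backed custom resources

def handler(event, context):
    try:
        if event["RequestType"] == "Create":
            physical_id = create_thing(event["ResourceProperties"])   # hypothetical
        elif event["RequestType"] == "Delete":
            delete_thing(event["PhysicalResourceId"])                 # hypothetical
            physical_id = event["PhysicalResourceId"]
        else:  # Update
            physical_id = event["PhysicalResourceId"]
        # Signal success back to CloudFormation so the stack operation continues.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, physical_id)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```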
  7. You have configured the following AWS services: Auto Scaling group, Elastic Load Balancer, and EC2 instances.  Your supervisor has requested that you terminate an instance when CPU utilization is less than 50%.  How can you do this?
    • Create a CloudWatch alarm to send a notification to the Auto Scaling group when the aggregated CPU utilization is less than 40% and configure the Auto Scaling policy to remove one instance.
  8. You are currently running an application on EC2 instances which are inside of an Auto Scaling group.  You’ve implemented a system that automates deployments of configurations and the application to newly launched instances.  The system uses a configuration management tool that works in a standalone configuration, with no master node.  Because the application load is unpredictable, new instances must be brought into service within 3 minutes of the launch of the instance operating system.  The deployment stages take the following times to complete:
    • Installing the configuration management agent: 2 mins
    • Configuring the instance with artifacts: 4 mins
    • Installing the application framework: 15 mins
    • Deploying the application code: 1 min
      • What process should you use to automate the deployment using this standalone agent configuration?
    • Bake a custom AMI that has all the components pre-installed, including the agent, configuration artifacts, application frameworks, and code.  Then have a startup script that executes the agent to configure the system on startup.
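      • A minimal sketch of the baking step, assuming a hypothetical “golden” instance that already has the agent, artifacts, and framework installed; the resulting AMI ID is what the launch configuration would reference:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture the fully configured instance as an AMI so only the ~1 minute of
# code deployment (plus the agent's startup run) happens at boot time.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",    # hypothetical golden instance
    Name="app-prebaked-v42",             # hypothetical AMI name
    Description="Agent, artifacts, and framework pre-installed",
)
print(image["ImageId"])
```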
  9. You have decided to perform a rolling deployment with Elastic Beanstalk.  Elastic Beanstalk detached all instances in the batch from the load balancer, deployed the new application version, and then re-attached the instances.  None of your instances are receiving traffic.  What happened?
    • Elastic Load Balancing waits until instances pass a minimum number of Elastic Load Balancing health checks (the Healthy check count threshold value), and then starts routing traffic to them.
  10. You have an Elastic Load Balancing health check policy that pings port 80 every 20 seconds, and after passing a threshold of 10 successful pings, reports the instance as being InService.  If enough ping requests time out, then the instance is reported to be OutOfService.  What can be used in conjunction with this infrastructure as a cost-effective solution?
    • Used with Auto Scaling, an instance that is OutOfService could be replaced if the Auto Scaling policy specifies it.  For scale-down activities, the load balancer removes the EC2 instance from the pool and drains current connections before the instance terminates.
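      • For reference, the health check described in the question maps onto the classic ELB settings like this (load balancer name is hypothetical):

```python
import boto3

elb = boto3.client("elb")  # classic Elastic Load Balancing

elb.configure_health_check(
    LoadBalancerName="web-elb",  # hypothetical
    HealthCheck={
        "Target": "TCP:80",          # ping port 80
        "Interval": 20,              # every 20 seconds
        "Timeout": 5,
        "HealthyThreshold": 10,      # 10 successes -> InService
        "UnhealthyThreshold": 2,     # enough failures -> OutOfService
    },
)
```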
  11. Which Auto Scaling process would be helpful to suspend when testing new instances before sending traffic to them, while still keeping them in your Auto Scaling group?
    • AddToLoadBalancer > Adds instances to the attached load balancer or target group when they are launched.  If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group.  If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched.  However, Auto Scaling does not add the instances that were launched while this process was suspended.  You must register those instances manually.
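      • A minimal sketch with boto3 (group name is hypothetical); note the manual registration caveat from the answer above:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# New instances will launch but stay out of the load balancer while suspended.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AddToLoadBalancer"],
)

# ...launch and test the new instances...

autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AddToLoadBalancer"],
)
# Instances launched during the suspension still need to be registered
# manually (e.g. with elb.register_instances_with_load_balancer).
```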
  12. You have configured the following AWS services: Auto Scaling Group, Elastic Load Balancer, and EC2 instances.  Your supervisor has requested that you terminate an instance when CPU utilization is less than 50%.  How can you do this?
    • Create a CloudWatch alarm to send a notification to the Auto Scaling group when aggregated CPU utilization is less than 40% and configure the Auto Scaling policy to remove one instance. > You can create a scaling policy that uses CloudWatch alarms to determine when your Auto Scaling group should scale out or scale in.  Each CloudWatch alarm watches a single metric and sends messages to Auto Scaling when the metric breaches a threshold that you specify in your policy.  You can use alarms to monitor any of the metrics that the AWS services you’re using send to CloudWatch, or you can create and monitor your own custom metrics.
  13. You currently have an Auto Scaling group with Elastic Load Balancer and need to phase out all instances and replace with a new instance type.  How can you achieve this?
    • Use the OldestLaunchConfiguration termination policy to phase out all instances that use the previous configuration.  If you are using an Elastic Load Balancer (ELB), you can attach an additional Auto Scaling configuration behind the ELB and use a similar approach to phase in newer instances while removing older instances.
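      • A minimal sketch of one variant: keep a single group, switch it to a new launch configuration, and prefer terminating instances from the old one (names are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-c3-large",         # new instance type
    TerminationPolicies=["OldestLaunchConfiguration"],
)
# As the group scales in (or instances are cycled), instances created from the
# old launch configuration are terminated first, phasing in the new type.
```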
  14. You are managing a Ruby application that needs Nginx for the front end; Elasticsearch, Logstash, and Kibana for log processing; and MongoDB for document management.  You can have logical groupings of your base AMIs that can take 80% of application binaries loaded on these AMI sets.  You have installed most applications during the bootstrapping process and alter the installation based on configuration sets grouped by instance tags, Auto Scaling groups, or other instance artifacts.  You set a tag on your Nginx instances (such as Nginx-v-1.6.2).  Your update process can query for the instance tag, validate whether it’s the most current version of Nginx, and then proceed with the installation.  What happens when it is time to update the prebaked AMI?
    • You can simply swap your existing AMI with the most recent version in the underlying deployment service and update the tag.
  15. You have multiple similar three-tier applications and have decided to use CloudFormation to maintain version control and achieve automation.  How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping costs down?
    • Create separate templates based on functionality, create nested stacks with CloudFormation. > Nested stacks allow us to link to a template from within another template, which can be useful when we reuse resources across templates so we don’t have to rewrite them repeatedly.
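      • A minimal sketch of a parent template (as a Python dict) pulling in a shared, reusable child template via a nested stack; the TemplateURL and parameter are placeholders:

```python
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SharedNetworkStack": {  # hypothetical logical ID
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                # Child template stored in S3 and reused by every environment.
                "TemplateURL": "https://s3.amazonaws.com/my-bucket/network.template",
                "Parameters": {"Environment": "dev"},  # hypothetical parameter
            },
        }
    },
}
```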
  16. You work for a very large pharmaceutical company that has multiple applications which are very different and built on different programming languages.  How can you deploy applications as quickly as possible?
    • Develop each app in a separate Docker container and deploy using Elastic Beanstalk > Elastic Beanstalk supports the deployment of web applications from Docker containers.  With Docker containers you can define your own runtime environment.  You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren’t supported by other platforms.  Docker containers are self-contained and include all the configuration information and software your web application requires to run.
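      • A rough sketch of pushing one of those containerized apps out, assuming the Docker bundle has already been zipped and uploaded to S3; application, environment, bucket, and key names are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register the uploaded bundle as a new application version...
eb.create_application_version(
    ApplicationName="pharma-app-1",
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "pharma-app-1/v42.zip"},
)

# ...and roll it out to the running environment.
eb.update_environment(
    EnvironmentName="pharma-app-1-prod",
    VersionLabel="v42",
)
```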
  17. We have an Amazon SQS queue.  Worker nodes poll our queue for jobs.  When they find jobs, they pull down the information locally to then process those jobs.  The number of jobs fluctuates depending on many unpredictable factors.  However, the more jobs there are, the more instances we need to process those jobs in a timely manner.  How can we implement a system for this?
    • Implement CloudWatch monitoring that checks the size of the queue and triggers an Auto Scaling scale out or scale in event depending on the size of the queue.  The bigger the breach, the more instances we add (and vice versa)
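      • A minimal sketch of the scale-out half (queue name, group name, and threshold are hypothetical; a scale-in policy would mirror this with a low threshold and a negative adjustment):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName="worker-asg",
    PolicyName="scale-out-on-queue-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)

# Alarm on how many messages are waiting in the queue; when the backlog grows
# past the threshold, the alarm fires the scale-out policy.
cloudwatch.put_metric_alarm(
    AlarmName="jobs-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "jobs"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=500,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)
```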
  18. What are some CloudWatch metrics?
    • DiskWriteOps, CPUUtilization, NetworkIn
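      • For example, pulling one of those standard EC2 metrics back out of CloudWatch (instance ID is hypothetical):

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```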
  19. You have protected instances from scale in, but still find that instances have been terminated.  How could this be?
    • The instances were marked as unhealthy and removed from the group.
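      • For context, scale-in protection is set per instance like this (IDs and names are hypothetical); it only guards against scale-in events, so an instance that fails health checks can still be replaced:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.set_instance_protection(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,
)
```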
