Last Updated on October 3, 2021 by Admin 2
SOA-C01 : AWS-SysOps : Part 13
A user has configured Auto Scaling with 3 instances. The user had created a new AMI after updating one of the instances. If the user wants to terminate two specific instances to ensure that Auto Scaling launches instances with the new launch configuration, which command should he run?
- as-delete-instance-in-auto-scaling-group <Instance ID> --no-decrement-desired-capacity
- as-terminate-instance-in-auto-scaling-group <Instance ID> --update-desired-capacity
- as-terminate-instance-in-auto-scaling-group <Instance ID> --decrement-desired-capacity
- as-terminate-instance-in-auto-scaling-group <Instance ID> --no-decrement-desired-capacity
The Auto Scaling command as-terminate-instance-in-auto-scaling-group <Instance ID> terminates the specified instance. The user must specify the --no-decrement-desired-capacity parameter to ensure that Auto Scaling launches a new instance from the launch configuration after terminating the old one. If the user specifies --decrement-desired-capacity instead, Auto Scaling terminates the instance and decreases the desired capacity by 1, so no replacement is launched.
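The capacity arithmetic above can be sketched as a toy model in plain Python (not the Auto Scaling API; instance IDs are invented):

```python
# Toy model of the flag's effect on desired capacity; not the AWS CLI/API.
def terminate_instance(group, instance_id, decrement_desired_capacity):
    group["instances"].remove(instance_id)
    if decrement_desired_capacity:
        group["desired"] -= 1            # --decrement-desired-capacity
    # Auto Scaling launches replacements until capacity matches desired.
    while len(group["instances"]) < group["desired"]:
        group["instances"].append("i-new")  # built from the current launch config

group = {"desired": 3, "instances": ["i-a", "i-b", "i-c"]}
terminate_instance(group, "i-a", decrement_desired_capacity=False)
print(group)  # desired stays 3, so a replacement instance is launched
```

With `decrement_desired_capacity=True` the group would simply shrink to 2 instances and no replacement would appear.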
A user has launched an EC2 instance from an instance store backed AMI. If the user restarts the instance, what will happen to the ephemeral storage data?
- All the data will be erased but the ephemeral storage will stay connected
- All data will be erased and the ephemeral storage is released
- It is not possible to restart an instance launched from an instance store backed AMI
- The data is preserved
A user can reboot an EC2 instance using the AWS console, the Amazon EC2 CLI or the Amazon EC2 API. Rebooting an instance is equivalent to rebooting an operating system. However, it is recommended that the user use Amazon EC2 to reboot the instance instead of running the operating system reboot command from the instance. When an instance launched from an instance store backed AMI is rebooted, all the ephemeral storage data is preserved.
A user has launched an EC2 instance. However, due to some reason the instance was terminated. If the user wants to find out the reason for termination, where can he find the details?
- It is not possible to find the details after the instance is terminated
- The user can get information from the AWS console, by checking the Instance description under the State transition reason label
- The user can get information from the AWS console, by checking the Instance description under the Instance Status Change reason label
- The user can get information from the AWS console, by checking the Instance description under the Instance Termination reason label
An EC2 instance, once terminated, may be available in the AWS console for a while after termination. The user can find the details about the termination from the description tab under the label State transition reason. If the instance is still running, there will be no reason listed. If the user has explicitly stopped or terminated the instance, the reason will be “User initiated shutdown”.
A user has created a VPC with CIDR 20.0.0.0/24. The user has used all the IPs of the CIDR and wants to increase the size of the VPC. The user has two subnets: public (20.0.0.0/28) and private (20.0.0.16/28). How can the user change the size of the VPC?
- The user can delete all the instances of the subnet. Change the size of the subnets to 20.0.0.0/32 and 20.0.0.16/32, respectively. Then the user can increase the size of the VPC using CLI
- It is not possible to change the size of the VPC once it has been created
- The user can add a subnet with a higher range so that it will automatically increase the size of the VPC
- The user can delete the subnets first and then modify the size of the VPC
Once the user has created a VPC, he cannot change the CIDR of that VPC. The user has to terminate all the instances, delete the subnets and then delete the VPC. He can then create a new VPC with a larger CIDR range and launch instances in the newly created VPC and subnets.
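The size difference can be checked with Python's ipaddress module (the 20.0.0.0 addresses here are illustrative, using the /24, /16 and /28 prefix lengths from the question):

```python
import ipaddress

# A /24 VPC offers 256 addresses; recreating it with a /16 offers 65,536.
small = ipaddress.ip_network("20.0.0.0/24")
large = ipaddress.ip_network("20.0.0.0/16")
print(small.num_addresses, large.num_addresses)  # 256 65536

# The original /28 subnets (16 addresses each) fit inside the larger CIDR.
public = ipaddress.ip_network("20.0.0.0/28")
assert public.subnet_of(large)
```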
A user has configured ELB with SSL using a security policy for secure negotiation between the client and load balancer. Which of the below mentioned security policies is supported by ELB?
- Dynamic Security Policy
- All the other options
- Predefined Security Policy
- Default Security Policy
Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration, known as a Security Policy, to negotiate SSL connections between a client and the load balancer. ELB supports two policies:
Predefined Security Policy, which comes with predefined cipher and SSL protocols;
Custom Security Policy, which allows the user to configure a policy.
A user has granted read/write permission of his S3 bucket using an ACL. Which of the below mentioned options is a valid ID to grant permission to other AWS accounts (grantees) using the ACL?
- IAM User ID
- S3 Secure ID
- Access ID
- Canonical user ID
An S3 bucket ACL grantee can be an AWS account or one of the predefined Amazon S3 groups. The user can grant permission to an AWS account by the email address of that account or by the canonical user ID. If the user provides an email in the grant request, Amazon S3 finds the canonical user ID for that account and adds it to the ACL. The resulting ACL will always contain the canonical user ID for the AWS account, and not the AWS account’s email address.
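As a sketch, a grant keyed by canonical user ID has the following shape in an S3 ACL request (the structure boto3's put_bucket_acl accepts; the ID value below is a truncated placeholder, not a real canonical user ID):

```python
# One grant entry in an S3 access control list, keyed by canonical user ID.
grant = {
    "Grantee": {
        "Type": "CanonicalUser",
        "ID": "79a59df900b949e5...",  # placeholder: real IDs are 64 hex characters
    },
    "Permission": "READ",
}
print(grant["Grantee"]["Type"])
```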
A user has configured an ELB to distribute the traffic among multiple instances. The user instances are facing some issues due to the back-end servers. Which of the below mentioned CloudWatch metrics helps the user understand the issue with the instances?
CloudWatch is used to monitor AWS as well as custom services. For ELB, CloudWatch provides various metrics, including error codes generated by the ELB as well as by the back-end servers (instances). It gives the count of the HTTP response codes generated by the back-end instances. This metric does not include any response codes generated by the load balancer. These metrics are:
The 2XX class status codes represent successful actions
The 3XX class status codes indicate that the user agent requires action
The 4XX class status codes represent client errors
The 5XX class status codes represent back-end server errors
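The mapping from a response code to its metric family can be expressed compactly (metric names follow the classic ELB HTTPCode_Backend_* convention):

```python
def status_class(code):
    """Map an HTTP status code to the ELB back-end metric family it counts toward."""
    return {2: "HTTPCode_Backend_2XX", 3: "HTTPCode_Backend_3XX",
            4: "HTTPCode_Backend_4XX", 5: "HTTPCode_Backend_5XX"}.get(code // 100)

print(status_class(200))  # HTTPCode_Backend_2XX
print(status_class(503))  # HTTPCode_Backend_5XX
```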
A user has launched an EC2 instance store backed instance in the US-East-1a zone. The user created AMI #1 and copied it to the Europe region. After that, the user made a few updates to the application running in the US-East-1a zone and created AMI #2 after the changes. If the user launches a new instance in Europe from the AMI #1 copy, which of the below mentioned statements is true?
- The new instance will have the changes made after the AMI copy as AWS just copies the reference of the original AMI during the copying. Thus, the copied AMI will have all the updated data
- The new instance will have the changes made after the AMI copy since AWS keeps updating the AMI
- It is not possible to copy the instance store backed AMI from one region to another
- The new instance in the EU region will not have the changes made after the AMI copy
Within EC2, when the user copies an AMI, the new AMI is fully independent of the source AMI; there is no link to the original (source) AMI. The user can modify the source AMI without affecting the new AMI, and vice versa. Therefore, in this case even if the source AMI is modified, the copied AMI in the EU region will not have the changes. To get those changes, the user needs to copy the updated source AMI to the destination region again.
A user runs the command “dd if=/dev/zero of=/dev/xvdf bs=1M” on a fresh blank EBS volume attached to a Linux instance. Which of the below mentioned activities is the user performing with the command given above?
- Creating a file system on the EBS volume
- Mounting the device to the instance
- Pre warming the EBS volume
- Formatting the EBS volume
When the user creates a new EBS volume and accesses it for the first time, it will encounter reduced IOPS due to the wiping or initialization of the block storage. To avoid this and achieve the best performance, it is recommended to pre-warm the EBS volume. For a blank volume attached to a Linux instance, the “dd” command is used to write to all the blocks on the device. In the command “dd if=/dev/zero of=/dev/xvdf bs=1M”, the “if” (input file) parameter should be set to one of the Linux virtual devices, such as /dev/zero. The “of” (output file) parameter should be set to the drive that the user wishes to warm. The “bs” parameter sets the block size of the write operation; for optimal performance, this should be set to 1 MB.
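A rough Python stand-in for what dd does here, writing zero-filled 1 MiB blocks sequentially (to a scratch file rather than a block device, so it is safe to run):

```python
import os
import tempfile

# Illustrative stand-in for "dd if=/dev/zero of=/dev/xvdf bs=1M".
block = b"\0" * (1024 * 1024)        # bs=1M: one block of zeros (the /dev/zero input)
blocks_to_write = 4                  # a real volume needs volume_size / 1 MiB blocks
with tempfile.NamedTemporaryFile(delete=False) as f:
    for _ in range(blocks_to_write):
        f.write(block)               # dd's copy loop: read zeros, write the device
    path = f.name

written = os.path.getsize(path)
os.remove(path)
print(written)  # 4194304 (4 MiB written)
```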
A user has created an Auto Scaling group using CLI. The user wants to enable CloudWatch detailed monitoring for that group. How can the user configure this?
- When the user sets an alarm on the Auto Scaling group, it automatically enables detailed monitoring
- By default detailed monitoring is enabled for Auto Scaling
- Auto Scaling does not support detailed monitoring
- Enable detailed monitoring from the AWS console
CloudWatch is used to monitor AWS as well as the custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. To enable detailed instance monitoring for a new Auto Scaling group, the user does not need to take any extra steps. When the user creates an Auto Scaling launch config as the first step for creating an Auto Scaling group, each launch configuration contains a flag named InstanceMonitoring.Enabled. The default value of this flag is true. Thus, the user does not need to set this flag if he wants detailed monitoring.
A user has created a VPC with a public subnet. The user has terminated all the instances which are part of the subnet. Which of the below mentioned statements is true with respect to this scenario?
- The user cannot delete the VPC since the subnet is not deleted
- All network interfaces attached to the instances will be deleted
- When the user launches a new instance it cannot use the same subnet
- The subnet in which the instances were launched will be deleted
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. When an instance is launched, it has a network interface attached to it. The user cannot delete the subnet until he terminates the instances and deletes the network interfaces. When the user terminates an instance, all the network interfaces attached to it are also deleted.
A user has configured ELB with SSL using a security policy for secure negotiation between the client and load balancer. The ELB security policy supports various ciphers. Which of the below mentioned options helps identify the matching cipher at the client side against the ELB cipher list when a client requests the ELB DNS name over SSL?
- Cipher Protocol
- Client Configuration Preference
- Server Order Preference
- Load Balancer Preference
Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration, known as a Security Policy, to negotiate SSL connections between a client and the load balancer. When a client requests the ELB DNS name over SSL and the load balancer is configured to support Server Order Preference, the load balancer selects the first cipher in its own list that matches any of the ciphers in the client’s list. Server Order Preference ensures that the load balancer, not the client, determines which cipher is used for the SSL connection.
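A minimal sketch of that negotiation logic (hypothetical cipher lists; not ELB’s actual implementation):

```python
def negotiate(server_ciphers, client_ciphers, server_order_preference):
    """Pick the SSL cipher: with Server Order Preference the load balancer's
    ranking wins; otherwise the client's ranking does."""
    first, second = ((server_ciphers, client_ciphers)
                     if server_order_preference else (client_ciphers, server_ciphers))
    # First cipher in the preferred list that both sides support.
    return next((c for c in first if c in second), None)

elb = ["AES256-SHA", "AES128-SHA", "RC4-SHA"]
client = ["RC4-SHA", "AES128-SHA"]
print(negotiate(elb, client, server_order_preference=True))   # AES128-SHA
print(negotiate(elb, client, server_order_preference=False))  # RC4-SHA
```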
A user has created a VPC with public and private subnets. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the NAT instance. Which of the below mentioned entries is not required for the NAT security group?
- For Inbound allow Source: 20.0.1.0/24 on port 80
- For Outbound allow Destination: 0.0.0.0/0 on port 80
- For Inbound allow Source: 20.0.0.0/24 on port 80
- For Outbound allow Destination: 0.0.0.0/0 on port 443
A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created public and private subnets to host the web server and DB server respectively, the user should configure the NAT instance so that instances in the private subnet can connect to the internet through it. The NAT must first be able to receive traffic on ports 80 and 443 from the private subnet. Thus, allow ports 80 and 443 in Inbound for the private subnet 20.0.1.0/24. To route this traffic to the internet, configure ports 80 and 443 in Outbound with destination 0.0.0.0/0. The NAT security group does not need an entry for the public subnet CIDR (20.0.0.0/24).
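The four required entries can be written out as data to make the point explicit (a toy check in plain Python, not an AWS API call; subnet addresses as in the question):

```python
# NAT security group entries: inbound only from the private subnet,
# outbound to the internet on ports 80 and 443.
nat_rules = [
    {"direction": "in",  "cidr": "20.0.1.0/24", "port": 80},
    {"direction": "in",  "cidr": "20.0.1.0/24", "port": 443},
    {"direction": "out", "cidr": "0.0.0.0/0",   "port": 80},
    {"direction": "out", "cidr": "0.0.0.0/0",   "port": 443},
]

# The public subnet CIDR never appears as an inbound source.
public_subnet = "20.0.0.0/24"
assert all(r["cidr"] != public_subnet for r in nat_rules)
```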
A user has created an application which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data. The application uses the DynamoDB SDK to connect to DynamoDB from the EC2 instance. Which of the below mentioned statements is true with respect to the best practice for security in this scenario?
- The user should attach an IAM role with DynamoDB access to the EC2 instance
- The user should create an IAM user with DynamoDB access and use its credentials within the application to connect with DynamoDB
- The user should create an IAM role, which has EC2 access so that it will allow deploying the application
- The user should create an IAM user with DynamoDB and EC2 access. Attach the user with the application so that it does not use the root account credentials
With AWS IAM, a user can create an application which runs on an EC2 instance and makes requests to AWS services such as DynamoDB or S3. It is recommended that the user not create an IAM user and pass that user’s credentials to the application, nor embed those credentials inside the application. Instead, the user should use an IAM role for EC2 and give that role access to DynamoDB / S3. When the role is attached to the EC2 instance, it provides temporary security credentials to the application hosted on that instance to connect to DynamoDB / S3.
An organization (Account ID 123412341234) has attached the below mentioned IAM policy to a user. What does this policy statement entitle the user to perform?
- The policy allows the IAM user to modify all IAM user’s credentials using the console, SDK, CLI or APIs
- The policy will give an invalid resource error
- The policy allows the IAM user to modify all credentials using only the console
- The policy allows the user to modify all IAM user’s password, sign in certificates and access keys using only CLI, SDK or APIs
AWS Identity and Access Management (IAM) is a web service which allows organizations to manage users and user permissions for various AWS services. If the organization (Account ID 123412341234) wants some of their users to manage the credentials (access keys, passwords, and sign-in certificates) of all IAM users, they should attach an applicable policy to that user or group of users. The policy in the question allows the IAM user to modify the credentials of all IAM users using only the CLI, SDK or APIs. The user cannot use the AWS console for this activity since he does not have list permission for the IAM users.
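Since the policy document itself is not reproduced above, here is a hypothetical policy of the kind the explanation describes: it allows credential-management actions on every IAM user but grants no iam:ListUsers, so the console’s user list cannot be displayed (the action patterns are illustrative, not the exam’s exact document):

```python
import json

# Hypothetical reconstruction of a credentials-management-only IAM policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["iam:*AccessKey*", "iam:*LoginProfile*", "iam:*SigningCertificate*"],
        "Resource": "arn:aws:iam::123412341234:user/*",
    }],
}
print(json.dumps(policy, indent=2))  # no iam:ListUsers, hence no console support
```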
A sys admin is trying to understand the sticky session algorithm. Please select the correct sequence of steps, both when the cookie is present and when it is not, to help the admin understand the implementation of the sticky session:
1. ELB inserts the cookie in the response
2. ELB chooses the instance based on the load balancing algorithm
3. Check the cookie in the service request
4. The cookie is found in the request
5. The cookie is not found in the request
- 3,1,4,2 [Cookie is not Present] & 3,1,5,2 [Cookie is Present]
- 3,4,1,2 [Cookie is not Present] & 3,5,1,2 [Cookie is Present]
- 3,5,2,1 [Cookie is not Present] & 3,4,2,1 [Cookie is Present]
- 3,2,5,4 [Cookie is not Present] & 3,2,4,5 [Cookie is Present]
Generally, AWS ELB routes each request to a zone with the minimum load. The Elastic Load Balancer provides a feature called sticky session which binds the user’s session with a specific EC2 instance. The load balancer uses a special load-balancer-generated cookie to track the application instance for each request. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the request is sent to the application instance specified in the cookie. If there is no cookie, the load balancer chooses an application instance based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that application instance.
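That flow can be simulated in a few lines (a toy model of the sticky-session logic, with round-robin standing in for the load-balancing algorithm):

```python
import itertools

instances = itertools.cycle(["i-1", "i-2"])  # stand-in for the LB algorithm

def handle(request):
    instance = request.get("cookie")          # step 3: check the cookie
    if instance is None:                      # step 5: cookie not found
        instance = next(instances)            # step 2: choose an instance
    return {"served_by": instance, "cookie": instance}  # step 1: insert cookie

first = handle({})                            # no cookie: LB picks an instance
second = handle({"cookie": first["cookie"]})  # cookie present: same instance again
print(first["served_by"], second["served_by"])  # i-1 i-1
```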
A user has a weighing plant. The user measures the weight of some goods every 5 minutes and sends data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list?
- Metric Name
- Time zone
- Namespace
- Value
AWS CloudWatch supports custom metrics. The user can always capture custom data and upload it to CloudWatch using the CLI or APIs. The user can publish the data to CloudWatch as single data points or as an aggregated set of data points called a statistic set. The user always has to include the namespace as part of the request. The user can supply a file instead of the metric name. If the user does not supply the time zone, the current time is assumed. If the user is sending the data as a single data point, it will have parameters such as value; however, if the user is sending an aggregate, it will have parameters such as statistic-values.
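The request shape can be sketched as the parameters a CloudWatch PutMetricData call takes (the namespace and metric name here are invented for the weighing-plant example):

```python
# Sketch of a PutMetricData request body; "WeighingPlant" and "GoodsWeightKg"
# are made up for this example.
request = {
    "Namespace": "WeighingPlant",            # mandatory on every request
    "MetricData": [{
        "MetricName": "GoodsWeightKg",
        "Value": 152.4,                      # a single data point
        # "Timestamp" omitted: CloudWatch assumes the current time
    }],
}
print(request["Namespace"])
```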
An organization has configured Auto Scaling for hosting their application. The system admin wants to understand the Auto Scaling health check process. If the instance is unhealthy, Auto Scaling launches an instance and terminates the unhealthy one. What is the order of execution?
- Auto Scaling launches a new instance first and then terminates the unhealthy instance
- Auto Scaling performs the launch and terminate processes in a random order
- Auto Scaling launches and terminates the instances simultaneously
- Auto Scaling terminates the instance first and then launches a new instance
Auto Scaling keeps checking the health of the instances at regular intervals and marks the instance for replacement when it is unhealthy. The ReplaceUnhealthy process terminates instances which are marked as unhealthy and subsequently creates new instances to replace them. This process first terminates the instance and then launches a new instance.
A user is trying to connect to a running EC2 instance using SSH. However, the user gets an Unprotected Private Key File error. Which of the below mentioned options can be a possible reason for rejection?
- The private key file has the wrong file permission
- The ppk file used for SSH is read only
- The public key file has the wrong permission
- The user has provided the wrong user name for the OS login
While doing SSH to an EC2 instance, if the user gets an Unprotected Private Key File error, it means that the private key file’s permissions on the computer are too open. Ideally, the private key should have the Unix permissions 0400. To fix this, run the command:
chmod 0400 /path/to/private.key
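The same fix, scripted in Python for illustration (applied to a throwaway file rather than a real key):

```python
import os
import stat
import tempfile

# Set key-file permissions to 0400 (owner read-only), as SSH requires.
fd, key_path = tempfile.mkstemp()
os.close(fd)
os.chmod(key_path, 0o400)                       # equivalent of chmod 0400
mode = stat.S_IMODE(os.stat(key_path).st_mode)  # read back the permission bits
print(oct(mode))  # 0o400
os.remove(key_path)
```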
A user has provisioned 2000 IOPS for an EBS volume. The application hosted on that volume is experiencing fewer IOPS than provisioned. Which of the below mentioned options does not affect the IOPS of the volume?
- The application does not have enough IO for the volume
- The instance is EBS optimized
- The EC2 instance has 10 Gigabit Network connectivity
- The volume size is too large
When the application does not experience the expected IOPS or throughput of the PIOPS EBS volume that was provisioned, the possible root cause could be that the EC2 bandwidth is the limiting factor: the instance might not be EBS-optimized or might not have 10 Gigabit network connectivity. Another possible cause for not experiencing the expected IOPS is that the user is not driving enough I/O to the EBS volume. The size of the volume does not affect IOPS.
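A quick back-of-the-envelope check of why instance bandwidth matters, assuming a 16 KiB I/O size (the I/O size is an assumption for this illustration):

```python
# Bandwidth needed to sustain the provisioned IOPS at a 16 KiB I/O size.
iops = 2000
io_size_bytes = 16 * 1024                # 16 KiB per operation (assumed)
required_mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(required_mib_per_s)  # 31.25 MiB/s: the instance's EBS bandwidth must cover this
```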