Last Updated on October 3, 2021 by Admin 2
SOA-C01 : AWS-SysOps : Part 06
A user is planning to set up notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type?
Amazon RDS uses the Amazon Simple Notification Service to provide a notification when an Amazon RDS event occurs. Event categories for the snapshot source type include Creation, Deletion, and Restoration. Backup is an event category for the DB instance source type, not for snapshots.
A customer is using AWS for Dev and Test. The customer wants to set up the Dev environment with CloudFormation. Which of the below mentioned steps is not required while using CloudFormation?
- Create a stack
- Configure a service
- Create and upload the template
- Provide the parameters configured as part of the template
AWS CloudFormation is an application management tool which provides application modelling, deployment, configuration, management and related activities. AWS CloudFormation introduces two concepts: the template and the stack. The template is a JSON-formatted, text-based file that describes all the AWS resources required to deploy and run an application. The stack is a collection of AWS resources which are created and managed as a single unit when AWS CloudFormation instantiates a template. While creating a stack, the user uploads the template and provides the data for the parameters, if required. Configuring a service is not a CloudFormation step.
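The template-then-stack flow above can be sketched in Python. This is a hypothetical minimal template (the parameter and resource names are invented for the example); a stack would be created from the resulting JSON body.

```python
import json

# A minimal, hypothetical CloudFormation template: one EC2 instance whose
# AMI ID is supplied as a stack parameter. All names here are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "AmiId": {"Type": "String", "Description": "AMI to launch"}
    },
    "Resources": {
        "DevInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": {"Ref": "AmiId"}},
        }
    },
}

# The template body is what gets uploaded when the stack is created,
# together with values for any declared parameters.
template_body = json.dumps(template, indent=2)
print(template_body)
```

With credentials configured, the stack could then be created with `boto3.client("cloudformation").create_stack(StackName=..., TemplateBody=template_body, Parameters=[...])`.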
A user has configured the AWS CloudWatch alarm for estimated usage charges in the US East region. Which of the below mentioned statements is not true with respect to the estimated charges?
- It will store the estimated charges data of the last 14 days
- It will include the estimated charges of every AWS service
- The metric data will represent the data of all the regions
- The metric data will show data specific to that region
When the user has enabled monitoring of estimated charges for the AWS account with AWS CloudWatch, the estimated charges are calculated and sent several times daily to CloudWatch in the form of metric data. This data is stored for 14 days. The billing metric data is stored in the US East (N. Virginia) region and represents worldwide charges. The data includes the estimated charges for every AWS service used by the user, as well as the estimated overall AWS charges.
A user is accessing RDS from an application. The user has enabled the Multi AZ feature with the MS SQL RDS DB. During a planned outage, how will AWS ensure that the switch from the primary DB to a standby replica will not affect access to the application?
- RDS will have an internal IP which will redirect all requests to the new DB
- RDS uses DNS to switch over to stand by replica for seamless transition
- The switch over changes Hardware so RDS does not need to worry about access
- RDS will have both the DBs running independently and the user has to manually switch over
In the event of a planned or unplanned outage of a DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if the user has enabled Multi AZ. The automatic failover mechanism simply changes the DNS record of the DB instance to point to the standby DB instance. As a result, the user will need to re-establish any existing connections to the DB instance. However, since the DNS name is unchanged, the application can access the DB seamlessly after reconnecting.
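The DNS-repoint mechanism above can be shown with a toy simulation. The endpoint name and addresses are invented; the point is that the application only ever knows the hostname, so a reconnect after failover lands on the standby without any configuration change.

```python
# Toy stand-in for DNS: maps the RDS endpoint name to the current
# primary's address. The hostname and IPs are invented for the sketch.
dns = {"mydb.example.rds.amazonaws.com": "10.0.1.10"}  # primary in AZ-a

def connect(hostname):
    """Resolve the endpoint at connection time, as a DB driver would."""
    return dns[hostname]

endpoint = "mydb.example.rds.amazonaws.com"
before = connect(endpoint)

# Planned outage: RDS flips the DNS record to the standby in AZ-b.
dns[endpoint] = "10.0.2.20"

# Existing connections break, but re-resolving the same hostname on
# reconnect reaches the standby -- the application config is unchanged.
after = connect(endpoint)
print(before, "->", after)
```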
An organization is generating digital policy files which are required by the admins for verification. Once the files are verified, they may not be required in the future unless there is a compliance issue. If the organization wants to save them in a cost-effective way, which is the best possible solution?
- AWS RRS
- AWS S3
- AWS RDS
- AWS Glacier
Amazon S3 stores objects according to their storage class. There are three major storage classes: Standard, Reduced Redundancy and Glacier. Standard is the default S3 class and provides very high durability; however, the costs are a little higher. Reduced Redundancy is for less critical, reproducible files. Glacier is for archival and for files that are accessed infrequently. It is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup.
A user has launched an EBS backed instance. The user started the instance at 9 AM. Between 9 AM and 10 AM, the user was testing a script; thus, he stopped the instance twice and restarted it each time. In the same hour the user also rebooted the instance once. For how many instance hours will AWS charge the user?
- 3 hours
- 4 hours
- 2 hours
- 1 hour
A user can stop/start or reboot an EC2 instance using the AWS console, the Amazon EC2 CLI or the Amazon EC2 API. Rebooting an instance is equivalent to rebooting an operating system; when the instance is rebooted, AWS will not charge the user for extra hours. In case the user stops the instance, AWS does not charge the running cost but charges only the EBS storage cost. However, if the user starts and stops the instance multiple times in a single hour, each start begins a new billable instance hour. In this case, the instance was launched once and then stopped and restarted twice within the hour, giving three separate run periods, so it will cost the user 3 instance hours.
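The legacy per-instance-hour rule above can be sketched as a small calculator: every transition to "running" opens a new billable hour (partial hours round up), while a reboot does not. The timeline values mirror the question.

```python
from math import ceil

# Event timeline from the question, as (event, hour-of-day) pairs.
events = [
    ("start", 9.0),   # initial launch at 9 AM
    ("stop", 9.25),
    ("start", 9.3),   # restart #1 -> new billable hour
    ("stop", 9.5),
    ("start", 9.6),   # restart #2 -> new billable hour
    ("reboot", 9.8),  # reboot: same billing period, no extra hour
]

def billable_hours(events, end=10.0):
    """Count legacy per-hour charges: one rounded-up hour per run period."""
    hours = 0
    running_since = None
    for kind, t in events:
        if kind == "start":
            running_since = t
        elif kind == "stop" and running_since is not None:
            hours += ceil(max(t - running_since, 1e-9))  # partial hour rounds up
            running_since = None
        # "reboot" keeps the same billing period: nothing to do
    if running_since is not None:
        hours += ceil(max(end - running_since, 1e-9))
    return hours

print(billable_hours(events))  # three run periods -> 3 instance hours
```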
An organization has configured custom metric uploads to CloudWatch. The organization has given its employees permission to upload data using the CLI as well as the SDK. How can the user track the calls made to CloudWatch?
- The user can enable logging with CloudWatch which logs all the activities
- Use CloudTrail to monitor the API calls
- Create an IAM user and allow each user to log the data using the S3 bucket
- Enable detailed monitoring with CloudWatch
AWS CloudTrail is a web service that allows the user to monitor the calls made to the Amazon CloudWatch API for the organization’s account, including calls made by the AWS Management Console, the Command Line Interface (CLI), and other services. When CloudTrail logging is turned on, CloudTrail writes log files into the Amazon S3 bucket that was specified during the CloudTrail configuration.
A user has created a queue named “myqueue” with SQS. Four messages have been published to the queue but have not yet been received by the consumer. If the user tries to delete the queue, what will happen?
- A user can never delete a queue manually. AWS deletes it after 30 days of inactivity on the queue
- It will delete the queue
- It will initiate the delete but wait four days, until all messages are deleted automatically, before deleting the queue
- It will ask user to delete the messages first
SQS allows the user to move data between distributed components of applications so they can perform different tasks without losing messages or requiring each component to be always available. The user can delete a queue at any time, whether it is empty or not; any messages remaining in the queue are deleted along with it. Note also that queues retain unconsumed messages only for a set period of time. By default, a queue retains messages for four days.
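The retention behaviour noted above can be expressed as a small check: an unconsumed message is held for the queue's retention period (four days by default) and then removed automatically.

```python
from datetime import datetime, timedelta

# Default SQS message retention period.
DEFAULT_RETENTION = timedelta(days=4)

def is_retained(sent_at, now, retention=DEFAULT_RETENTION):
    """True while an unconsumed message is still held by the queue."""
    return now - sent_at <= retention

sent = datetime(2021, 10, 1, 9, 0)
print(is_retained(sent, sent + timedelta(days=3)))  # still in the queue
print(is_retained(sent, sent + timedelta(days=5)))  # expired and removed
```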
A user has launched a large EBS backed EC2 instance in the US-East-1a region. The user wants to achieve Disaster Recovery (DR) for that instance by creating another small instance in Europe. How can the user achieve DR?
- Copy the running instance using the “Instance Copy” command to the EU region
- Create an AMI of the instance and copy the AMI to the EU region. Then launch the instance from the EU AMI
- Copy the instance from the US East region to the EU region
- Use the “Launch more like this” option to copy the instance from one region to another
To launch an EC2 instance in a region, an AMI must exist in that region. If the AMI is not available there, the user must either create a new AMI in that region or use the AMI copy command to copy the AMI from the source region to the destination region, and then launch the instance from the copied AMI.
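A hedged boto3-style sketch of that cross-region DR flow: create an AMI from the instance, copy it to the destination region, then launch from the copy. The AMI ID and region names are placeholders, and the boto3 call is only composed here, not executed.

```python
def build_copy_image_request(ami_id, name, source_region="us-east-1"):
    """Compose parameters for ec2.copy_image(), which is called in the
    destination region and pulls the AMI from the source region."""
    return {
        "SourceImageId": ami_id,
        "SourceRegion": source_region,
        "Name": name,
    }

# Placeholder AMI ID for illustration.
request = build_copy_image_request("ami-0123456789abcdef0", "dr-copy")
print(request)

# With credentials configured this could be executed as:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="eu-west-1")  # destination region
#   response = ec2.copy_image(**request)
#   # then launch the DR instance from the copy:
#   # ec2.run_instances(ImageId=response["ImageId"], ...)
```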
A user has created numerous EBS volumes. What is the general limit for each AWS account for the maximum number of EBS volumes that can be created?
A user can attach multiple EBS volumes to the same instance within the limits specified by his AWS account. Each AWS account has a limit on the number of Amazon EBS volumes that the user can create, and the total storage available. The default limit for the maximum number of volumes that can be created is 5000.
A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet CIDR (20.0.0.0/24) and a VPN-only subnet CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user’s data center. Which of the below mentioned options is a valid entry for the main route table in this scenario?
- Destination: 20.0.1.0/24 and Target: vgw-12345
- Destination: 20.0.0.0/16 and Target: ALL
- Destination: 20.0.0.0/16 and Target: vgw-12345
- Destination: 0.0.0.0/0 and Target: vgw-12345
The main route table came with the VPC, and it also has a route for the VPN-only subnet. A custom route table is associated with the public subnet. The custom route table has a route over the Internet gateway (the destination is 0.0.0.0/0, and the target is the Internet gateway).
If you create a new subnet in this VPC, it’s automatically associated with the main route table, which routes its traffic to the virtual private gateway. If you were to set up the reverse configuration (the main route table with the route to the Internet gateway, and the custom route table with the route to the virtual private gateway), then a new subnet automatically has a route to the Internet gateway.
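The route-table behaviour above can be illustrated with a small longest-prefix-match lookup. The CIDRs are illustrative: a local route for the VPC range plus the catch-all route to the virtual private gateway.

```python
import ipaddress

# Main route table from the scenario: the local VPC route plus the
# catch-all route over the VPN gateway. CIDRs here are illustrative.
routes = [
    ("20.0.0.0/16", "local"),    # traffic within the VPC
    ("0.0.0.0/0", "vgw-12345"),  # everything else to the VPN gateway
]

def lookup(destination_ip, routes):
    """Pick the most specific route whose CIDR covers the destination,
    as VPC route tables do (longest prefix match)."""
    addr = ipaddress.ip_address(destination_ip)
    best = None
    for cidr, target in routes:
        net = ipaddress.ip_network(cidr)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1]

print(lookup("20.0.1.5", routes))      # in-VPC traffic stays local
print(lookup("192.168.10.7", routes))  # everything else goes to the VGW
```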
A user has stored data on an encrypted EBS volume. The user wants to share the data with his friend’s AWS account. How can the user achieve this?
- Create an AMI from the volume and share the AMI
- Copy the data to an unencrypted volume and then share
- Take a snapshot and share the snapshot with a friend
- If both the accounts are using the same encryption key then the user can share the volume directly
AWS EBS supports encryption of the volume. It also supports creating volumes from existing snapshots, provided the snapshots were created from encrypted volumes. If the user has data on an encrypted volume and wants to share it with others, he has to copy the data from the encrypted volume to a new unencrypted volume. Only then can the user share the data, since a snapshot of an encrypted volume cannot be shared.
A user has enabled the Multi AZ feature with the MS SQL RDS database server. Which of the below mentioned statements will help the user understand the Multi AZ feature better?
- In a Multi AZ, AWS runs two DBs in parallel and copies the data asynchronously to the replica copy
- In a Multi AZ, AWS runs two DBs in parallel and copies the data synchronously to the replica copy
- In a Multi AZ, AWS runs just one DB but copies the data synchronously to the standby replica
- AWS MS SQL does not support the Multi AZ feature
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Note that the high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a read replica.
An organization is using cost allocation tags to find the cost distribution of different departments and projects. One of the instances has two separate tags with the key/value pairs “InstanceName/HR” and “CostCenter/HR”. What will AWS do in this case?
- InstanceName is a reserved tag for AWS. Thus, AWS will not allow this tag
- AWS will not allow the tags as the value is the same for different keys
- AWS will allow tags but will not show correctly in the cost allocation report due to the same value of the two separate keys
- AWS will allow both the tags and show properly in the cost distribution report
AWS provides cost allocation tags to categorize and track AWS costs. When the user applies tags to his AWS resources, AWS generates a cost allocation report as a comma-separated values (CSV) file with the usage and costs aggregated by those tags. Each tag has a key and a value and can be applied to services such as EC2, S3, RDS, EMR, etc. The key must be different for each tag on a resource, but the value can be the same for different keys. In this case, since the keys are different, AWS will allow both tags and show the distribution report with the correct values.
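The tagging rule stated above can be sketched as a validator: per resource, tag keys must be unique, while the same value may repeat under different keys. The function and its behaviour are illustrative, not an AWS API.

```python
def validate_tags(tags):
    """tags: list of (key, value) pairs. Raise if any key repeats,
    mirroring the per-resource unique-key rule for AWS tags."""
    seen = set()
    for key, _value in tags:
        if key in seen:
            raise ValueError(f"duplicate tag key: {key}")
        seen.add(key)
    return dict(tags)

# The instance from the question: two keys, same value -- allowed.
print(validate_tags([("InstanceName", "HR"), ("CostCenter", "HR")]))

# A repeated key, by contrast, is rejected:
try:
    validate_tags([("CostCenter", "HR"), ("CostCenter", "Finance")])
except ValueError as err:
    print(err)
```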
A user is publishing custom metrics to CloudWatch. Which of the below mentioned statements will help the user understand the functionality better?
- The user can use the CloudWatch Import tool
- The user should be able to see the data in the console after around 15 minutes
- If the user is uploading the custom data, the user must supply the namespace, timestamp, and metric name as part of the command
- The user can view as well as upload data using the console, CLI and APIs
AWS CloudWatch supports custom metrics. The user can capture custom data and upload it to CloudWatch using the CLI or APIs. The user always has to include the namespace as part of the request; the other parameters are optional. If the user has uploaded data using the CLI, he can view it as a graph in the console. The data takes around 2 minutes to upload but can be viewed only after around 15 minutes.
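A hedged boto3-style sketch of the custom-metric upload: the namespace is mandatory, while a field such as the timestamp may be omitted (CloudWatch then uses the time of receipt). The namespace and metric names are invented for the example, and the call itself is only composed, not sent.

```python
def build_put_metric_data(namespace, metric_name, value, timestamp=None):
    """Compose kwargs for cloudwatch.put_metric_data()."""
    if not namespace:
        raise ValueError("Namespace is required for custom metrics")
    datum = {"MetricName": metric_name, "Value": value}
    if timestamp is not None:
        datum["Timestamp"] = timestamp  # optional; defaults to receipt time
    return {"Namespace": namespace, "MetricData": [datum]}

# Hypothetical namespace and metric for illustration.
request = build_put_metric_data("MyApp/Orders", "OrdersProcessed", 42.0)
print(request)

# With credentials configured this could be sent as:
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(**request)
```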
A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the availability zone?
- Always select the US-East-1-a zone for HA
- Do not select the AZ; instead let AWS select the AZ
- The user can never select the availability zone while launching an instance
- Always select the AZ while launching an instance
When launching an instance with EC2, AWS recommends not selecting the Availability Zone and instead accepting the default. This enables AWS to select the best Availability Zone based on system health and available capacity. An Availability Zone should be specified only when the user launches additional instances and wants to place them in the same AZ as, or a different AZ from, the running instances.
A user has created a VPC with CIDR 20.0.0.0/16 with only a private subnet and a VPN connection using the VPC wizard. The user wants to connect to an instance in the private subnet over SSH. How should the user define the security rule for SSH?
- Allow Inbound traffic on port 22 from the user’s network
- The user has to create an instance in EC2 Classic with an elastic IP and configure the security group of a private subnet to allow SSH from that elastic IP
- The user can connect to an instance in a private subnet using the NAT instance
- Allow Inbound traffic on port 80 and 22 to allow the user to connect to a private subnet over the Internet
The user can create subnets as per the requirement within a VPC. If the user wants to connect to the VPC from his own data center, he can set up a scenario with a VPN-only (private) subnet that uses VPN access to connect with the data center. When the user has configured this setup with the wizard, all network connections to the instances in the subnet come from his data center. The user has to configure the security group of the private subnet to allow inbound traffic on SSH (port 22) from the data center’s network range.
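The security-group check described above can be sketched as a rule match on both port and source CIDR. The data-center range and the rule structure are placeholders for the example.

```python
import ipaddress

# Hypothetical on-premises network range for the data center.
DATA_CENTER_CIDR = "192.168.0.0/16"

def allows_ssh(rule, source_ip):
    """rule: dict with 'port' and 'cidr', mimicking an inbound SG rule.
    SSH is allowed only if the port is 22 and the source is in range."""
    in_range = ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"])
    return rule["port"] == 22 and in_range

ssh_rule = {"port": 22, "cidr": DATA_CENTER_CIDR}
print(allows_ssh(ssh_rule, "192.168.5.9"))  # from the data center: allowed
print(allows_ssh(ssh_rule, "203.0.113.8"))  # from elsewhere: blocked
```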
A user has created an ELB with the availability zone US-East-1. The user wants to add more zones to the ELB to achieve High Availability. How can the user add more zones to the existing ELB?
- It is not possible to add more zones to the existing ELB
- The only option is to launch instances in different zones and add to ELB
- The user should stop the ELB and add zones and instances as required
- The user can add zones on the fly from the AWS console
A user has configured an Auto Scaling group with ELB. The user has enabled detailed CloudWatch monitoring on Elastic Load Balancing. Which of the below mentioned statements will help the user understand this functionality better?
- ELB sends data to CloudWatch every minute only and does not charge the user
- ELB will send data every minute and will charge the user extra
- ELB is not supported by CloudWatch
- It is not possible to setup detailed monitoring for ELB
CloudWatch is used to monitor AWS services as well as custom metrics. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. Elastic Load Balancing includes 10 metrics and 2 dimensions, and sends data to CloudWatch every minute at no extra cost.
A user has configured ELB with two EBS backed EC2 instances. The user is trying to understand the DNS access and IP support for ELB. Which of the below mentioned statements may not help the user understand the IP mechanism supported by ELB?
- The client can connect over IPV4 or IPV6 using Dualstack
- ELB DNS supports both IPV4 and IPV6
- Communication between the load balancer and back-end instances is always through IPV4
- The ELB supports either IPV4 or IPV6 but not both
Elastic Load Balancing supports both Internet Protocol version 6 (IPv6) and Internet Protocol version 4 (IPv4). Clients can connect to the user’s load balancer using either IPv4 or IPv6 (in EC2-Classic) DNS. However, communication between the load balancer and its back-end instances uses only IPv4. The user can use the dualstack-prefixed DNS name to enable IPv6 support for communications between the client and the load balancers. Thus, the clients are able to access the load balancer using either IPv4 or IPv6 as their individual connectivity needs dictate.