Last Updated on October 3, 2021 by Admin 2
SOA-C01 : AWS-SysOps : Part 14
A storage admin wants to encrypt all the objects stored in S3 using server side encryption. The user does not want to use the AES 256 encryption key provided by S3. How can the user achieve this?
- The admin should upload his secret key to the AWS console and let S3 decrypt the objects
- The admin should use CLI or API to upload the encryption key to the S3 bucket. When making a call to the S3 API mention the encryption key URL in each request
- S3 does not support client supplied encryption keys for server side encryption
- The admin should send the keys and encryption algorithm with each API call
AWS S3 supports client-side or server-side encryption to encrypt all data at rest. With server-side encryption, the user can either use the S3-provided AES-256 encryption key or supply his own encryption key along with each API call. Amazon S3 never stores the user’s encryption key; the user has to supply it with each encryption or decryption call.
A user is trying to create a PIOPS EBS volume with 8 GB size and 200 IOPS. Will AWS create the volume?
- Yes, since the ratio between EBS and IOPS is less than 30
- No, since the PIOPS and EBS size ratio is less than 30
- No, the EBS size is less than 10 GB
- Yes, since PIOPS is higher than 100
A user has scheduled the maintenance window of an RDS DB on Monday at 3 AM. Which of the below mentioned events may force the DB instance to be taken offline during the maintenance window?
- Enabling Read Replica
- Making the DB Multi AZ
- DB password change
- Security patching
Amazon RDS performs maintenance on the DB instance during a user-definable maintenance window. The system may be offline or experience lower performance during that window. The only maintenance events that may require RDS to make the DB instance offline are:
Scaling compute operations
Software patching. Required software patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and seldom requires more than a fraction of the maintenance window.
An organization has launched 5 instances: 2 for production and 3 for testing. The organization wants one particular group of IAM users to access only the test instances and not the production ones. How can the organization set that as a part of the policy?
- Launch the test and production instances in separate regions and allow region wise access to the group
- Define the IAM policy which allows access based on the instance ID
- Create an IAM policy with a condition which allows access to only small instances
- Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags
AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. The user can add conditions as a part of the IAM policies. The condition can be set on AWS Tags, Time, and Client IP as well as on various parameters. If the organization wants the user to access only specific instances he should define proper tags and add to the IAM policy condition. The sample policy is shown below.
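The sample policy referenced above did not survive in this copy; a minimal sketch of such a tag-conditioned policy might look like the following (the tag key `Environment`, the tag value `Test`, and the blanket `ec2:*` action are illustrative assumptions, not part of the original answer):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Environment": "Test" }
      }
    }
  ]
}
```

With tags such as `Environment=Test` on the test instances and `Environment=Production` on the production ones, the condition restricts the group to the test instances only.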
A user has configured Auto Scaling with the minimum capacity as 2 and the desired capacity as 2. The user is trying to terminate one of the existing instances with the command:
What will Auto Scaling do in this scenario?
- Terminates the instance and does not launch a new instance
- Terminates the instance and updates the desired capacity to 1
- Terminates the instance and updates the desired capacity and minimum size to 1
- Throws an error
The legacy Auto Scaling command as-terminate-instance-in-auto-scaling-group <Instance ID> terminates the specified instance. The user is required to also specify the --decrement-desired-capacity parameter; Auto Scaling will then terminate the instance and decrease the desired capacity by 1. In this case, since the minimum size is 2, Auto Scaling will not allow the desired capacity to go below 2. Thus, it will throw an error.
A user is collecting 1000 records per second. The user wants to send the data to CloudWatch using the custom namespace. Which of the below mentioned options is recommended for this activity?
- Aggregate the data with statistics, such as Min, Max, Average, Sum, and Sample Count, and send the data to CloudWatch
- Send all the data values to CloudWatch in a single command by separating them with a comma. CloudWatch will parse automatically
- Create one csv file of all the data and send a single file to CloudWatch
- It is not possible to send all the data in one call. Thus, it should be sent one by one. CloudWatch will aggregate the data automatically
AWS CloudWatch supports custom metrics. The user can always capture custom data and upload it to CloudWatch using the CLI or APIs. Data can be published to CloudWatch as single data points or as an aggregated set of data points, called a statistic set, using the put-metric-data command. When there are multiple data points per minute, it is recommended to aggregate the data to minimize the number of put-metric-data calls: in this case a single call to CloudWatch instead of 1,000 calls.
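As a sketch of the aggregation described above, the following collapses raw values into the SampleCount/Sum/Minimum/Maximum statistic set that a single put-metric-data call accepts (the metric name, unit, and namespace are illustrative assumptions):

```python
# Aggregate many raw values into one CloudWatch statistic set, so a
# single put-metric-data call replaces hundreds of individual calls.

def to_statistic_set(values):
    """Collapse raw data points into the SampleCount/Sum/Minimum/Maximum
    form that CloudWatch accepts as a statistic set."""
    return {
        "SampleCount": len(values),
        "Sum": sum(values),
        "Minimum": min(values),
        "Maximum": max(values),
    }

records = [12.0, 7.5, 30.2, 18.1]  # stands in for the 1000 records/second
datum = {
    "MetricName": "RecordLatency",  # assumed metric name
    "StatisticValues": to_statistic_set(records),
    "Unit": "Milliseconds",
}
# With boto3, this single datum would be published as:
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="MyApp/Custom", MetricData=[datum])
print(datum["StatisticValues"]["SampleCount"])  # 4
```

CloudWatch reconstructs Min, Max, Average, and Sum statistics from the statistic set, so no per-record detail needed for monitoring is lost.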
A user is trying to create an EBS volume with the highest PIOPS supported by EBS. What is the minimum size of EBS required to have the maximum IOPS?
A provisioned IOPS EBS volume can range in size from 10 GB to 1 TB, and the user can provision up to 4,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be at most 30. To get the maximum of 4,000 IOPS, the volume must therefore be at least 4,000 / 30 ≈ 133.3 GB, i.e. about 134 GB.
An organization is trying to create various IAM users. Which of the below mentioned options is not a valid IAM username?
AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. Whenever the organization creates an IAM user, there should be a unique ID for each user. The names of users, groups, roles, and instance profiles must be alphanumeric, and may also include the following common characters: plus (+), equal (=), comma (,), period (.), at (@), and dash (-).
A user has data generated randomly based on a certain event. The user wants to upload that data to CloudWatch. Due to the randomness, the event may not generate any data for some periods. Which of the below mentioned options is recommended for this case?
- For the period when there is no data, the user should not send the data at all
- For the period when there is no data the user should send a blank value
- For the period when there is no data the user should send the value as 0
- The user must upload the data to CloudWatch as having no data for some period will cause an error at CloudWatch monitoring
AWS CloudWatch supports custom metrics. The user can always capture custom data and upload it to CloudWatch using the CLI or APIs. When the data is generated at random rather than at regular intervals, there can be periods with no associated data. The user can either publish a zero (0) value for that period or not publish data at all. It is recommended to publish zero instead of no value to monitor the health of the application; this helps with alarms as well as with the sample data count.
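A minimal sketch of the recommended approach, filling event-less periods with an explicit zero before publishing (the period indexing and values are illustrative):

```python
# For random events, periods with no data are published as an explicit 0
# rather than skipped, so alarms and sample counts stay meaningful.

def fill_empty_periods(samples, periods):
    """Return one value per period: the recorded value if present, else 0."""
    by_period = dict(samples)  # {period_index: value}
    return [by_period.get(p, 0) for p in range(periods)]

# Events occurred only in periods 0 and 3 out of 5
datapoints = fill_empty_periods([(0, 42), (3, 7)], periods=5)
print(datapoints)  # [42, 0, 0, 7, 0]
```

Each value in the resulting list would then be published as an ordinary data point, so CloudWatch sees a continuous metric instead of gaps.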
A user is sending the data to CloudWatch using the CloudWatch API. The user is sending data 90 minutes in the future. What will CloudWatch do in this case?
- CloudWatch will accept the data
- It is not possible to send data of the future
- It is not possible to send the data manually to CloudWatch
- The user cannot send data for more than 60 minutes in the future
With Amazon CloudWatch, each metric data point must be marked with a time stamp. The user can send the data using the CLI, with the time stamp in UTC; if the user does not provide a time stamp, CloudWatch records the time at which it received the data, in UTC. The time stamp sent by the user can be up to two weeks in the past and up to two hours into the future.
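The accepted window can be sketched as a simple timestamp check, assuming the two-weeks-past / two-hours-future limits quoted above:

```python
from datetime import datetime, timedelta, timezone

# CloudWatch accepts timestamps up to two weeks in the past and up to
# two hours in the future; a point 90 minutes ahead therefore passes.
def is_accepted(ts, now):
    return now - timedelta(weeks=2) <= ts <= now + timedelta(hours=2)

now = datetime(2021, 10, 3, 12, 0, tzinfo=timezone.utc)
print(is_accepted(now + timedelta(minutes=90), now))  # True
print(is_accepted(now + timedelta(hours=3), now))     # False
```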
A user wants to upload a complete folder to AWS S3 using the S3 Management console. How can the user perform this activity?
- Just drag and drop the folder using the flash tool provided by S3
- Use the Enable Enhanced Folder option from the S3 console while uploading objects
- The user cannot upload the whole folder in one go with the S3 management console
- Use the Enable Enhanced Uploader option from the S3 console while uploading objects
AWS S3 provides a console to upload objects to a bucket. The user can use the file upload screen to upload the whole folder in one go by clicking on the Enable Enhanced Uploader option. When the user uploads a folder, Amazon S3 uploads all the files and subfolders from the specified folder to the user’s bucket. It then assigns a key value that is a combination of the uploaded file name and the folder name.
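The key naming described above can be sketched as follows (the folder and file names are illustrative):

```python
# When a folder is uploaded through the S3 console, each object's key
# combines the folder name with the file's path inside that folder.
def object_key(folder, relative_path):
    return f"{folder}/{relative_path}"

print(object_key("reports", "2021/october/summary.csv"))
# reports/2021/october/summary.csv
```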
Which of the below mentioned AWS RDS logs cannot be viewed from the console for MySQL?
- Error Log
- Slow Query Log
- Transaction Log
- General Log
The user can view, download, and watch the database logs using the Amazon RDS console, the Command Line Interface (CLI), or the Amazon RDS API. For MySQL RDS, the user can view the error log, slow query log, and general log. RDS does not support viewing the transaction logs.
A user has launched an EBS backed EC2 instance in the US-East-1a region. The user stopped the instance and started it back after 20 days. AWS throws an ‘InsufficientInstanceCapacity’ error. What can be the possible reason for this?
- AWS does not have sufficient capacity in that availability zone
- AWS zone mapping is changed for that user account
- There is some issue with the host capacity on which the instance is launched
- The user account has reached the maximum EC2 instance limit
When the user gets an ‘InsufficientInstanceCapacity’ error while launching or starting an EC2 instance, it means that AWS does not currently have enough available capacity to service the request. If the user is requesting a large number of instances, there might not be enough server capacity to host them. The user can wait a few minutes and try again, request a smaller number of instances, or, when launching a fresh instance, choose a different Availability Zone.
A user has created a VPC with public and private subnets using the VPC wizard. Which of the below mentioned statements is true in this scenario?
- The AWS VPC will automatically create a NAT instance with the micro size
- VPC bounds the main route table with a private subnet and a custom route table with a public subnet
- The user has to manually create a NAT instance
- VPC bounds the main route table with a public subnet and a custom route table with a private subnet
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. A user can create subnets within a VPC and launch instances inside them. With a public and a private subnet, the instances in the public subnet can receive inbound traffic directly from the internet, whereas the instances in the private subnet cannot. When these subnets are created with the wizard, AWS also launches a NAT instance, whose size the user selects during setup. The VPC has an implied router: the VPC wizard associates the main route table with the private subnet, and creates a custom route table and associates it with the public subnet.
The CFO of a company wants to allow one of his employees to view only the AWS usage report page. Which of the below mentioned IAM policy statements allows the user to have access to the AWS usage report page?
- “Effect”: “Allow”, “Action”: [“Describe”], “Resource”: “Billing”
- “Effect”: “Allow”, “Action”: [“AccountUsage], “Resource”: “*”
- “Effect”: “Allow”, “Action”: [“aws-portal:ViewUsage”], “Resource”: “*”
- “Effect”: “Allow”, “Action”: [“aws-portal:ViewBilling”], “Resource”: “*”
AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. If the CFO wants to allow only AWS usage report page access, the policy for that IAM user will be as given below:
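The referenced policy is not reproduced in this copy; a hedged sketch granting only usage-report access could look like this (only the aws-portal:ViewUsage action comes from the answer above; the rest is standard policy boilerplate):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "aws-portal:ViewUsage",
      "Resource": "*"
    }
  ]
}
```

Because only aws-portal:ViewUsage is allowed, the user can open the usage report page but not the wider billing pages, which require aws-portal:ViewBilling.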
An organization has created 10 IAM users. The organization wants each of the IAM users to have access to a separate DynamoDB table. All the users are added to the same group and the organization wants to setup a group level policy for this. How can the organization achieve this?
- Define the group policy and add a condition which allows the access based on the IAM name
- Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
- Create a separate DynamoDB database for each user and configure a policy in the group based on the DB variable
- It is not possible to have a group level policy which allows different IAM users to different DynamoDB Tables
AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. AWS DynamoDB has only tables; the organization cannot create separate databases. The organization should create a table with the same name as the IAM user name and use the DynamoDB table ARN, with a policy variable, as part of the group policy. The sample policy is shown below:
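The sample policy did not survive in this copy; a sketch using the aws:username policy variable in the table ARN might look like the following (the region and account ID are placeholder assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/${aws:username}"
    }
  ]
}
```

When a user in the group makes a request, ${aws:username} resolves to that user’s own name, so each of the 10 users can reach only the table named after them, with a single group-level policy.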
A user has configured an HTTPS listener on an ELB. The user has not configured any security policy which can help to negotiate SSL between the client and ELB. What will ELB do in this scenario?
- By default, ELB will select the first version of the security policy
- By default, ELB will select the latest version of the policy
- ELB creation will fail without a security policy
- It is not required to have a security policy since SSL is already installed
Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration which is known as a Security Policy. It is used to negotiate the SSL connections between a client and the load balancer. If the user has created an HTTPS/SSL listener without associating any security policy, Elastic Load Balancing will, by default, associate the latest version of the ELBSecurityPolicy-YYYY-MM with the load balancer.
A user is creating a Cloudformation stack. Which of the below mentioned limitations does not hold true for Cloudformation?
- One account by default is limited to 100 templates
- The user can use 60 parameters and 60 outputs in a single template
- The template, parameter, output, and resource description fields are limited to 4096 characters
- One account by default is limited to 20 stacks
AWS CloudFormation is an application management tool which provides application modeling, deployment, configuration, management, and related activities. The following limits apply to CloudFormation templates and stacks: there is no limit on the number of templates, but each AWS CloudFormation account is limited to a maximum of 20 stacks by default. The Template, Parameter, Output, and Resource description fields are limited to 4,096 characters, and a single template can include up to 60 parameters and 60 outputs.
A user has two EC2 instances running in two separate regions. The user is running an internal memory management tool, which captures the data and sends it to CloudWatch in US East, using a CLI with the same namespace and metric. Which of the below mentioned options is true with respect to the above statement?
- The setup will not work as CloudWatch cannot receive data across regions
- CloudWatch will receive and aggregate the data based on the namespace and metric
- CloudWatch will give an error since the data will conflict due to two sources
- CloudWatch will take the data of the server, which sends the data first
Amazon CloudWatch does not differentiate the source of a metric when receiving custom data. If the user publishes a metric with the same namespace and dimensions from different sources, CloudWatch treats them as a single metric. If data points arrive with the same timestamp within a minute, CloudWatch aggregates them, allowing the user to get statistics such as minimum, maximum, average, and sum across all servers.
An organization has created a Queue named “modularqueue” with SQS. The organization is not performing any operations such as SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, SetQueueAttributes, AddPermission, and RemovePermission on the queue. What can happen in this scenario?
- AWS SQS sends notification after 15 days for inactivity on queue
- AWS SQS can delete queue after 30 days without notification
- AWS SQS marks queue inactive after 30 days
- AWS SQS notifies the user after 2 weeks and deletes the queue after 3 weeks.
Amazon SQS can delete a queue without notification if one of the following actions hasn’t been performed on it for 30 consecutive days: SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, SetQueueAttributes, AddPermission, and RemovePermission.