Welcome to Day 9 of our exciting "30 Days of AWS" journey!
If you've been following along from the beginning, kudos to you for getting into the world of Amazon Web Services. Your dedication and curiosity are truly commendable.
For those who might have just joined us or are specifically interested in today's topic, a warm welcome to you as well! While each article in this series delves into a different facet of AWS, rest assured that they are all interconnected, building upon the knowledge we've been cultivating day by day.
If you're here for the first time, I encourage you to take a moment to catch up on our previous discussions. This will not only enhance your understanding but also ensure a seamless flow as we dive deeper into the fascinating world of AWS together.
In today's installment, we're going to explore "Monitoring and Logging with CloudWatch". As always, feel free to engage, ask questions, and share your thoughts in the comments. Your participation is what makes this series vibrant and valuable. I'm thrilled to have you join us on this journey. Let's get started!
What is AWS CloudWatch?
AWS CloudWatch is like your personal detective, constantly watching over your AWS resources, collecting data, and providing insights into their performance and health.
Fun Fact #1: CloudWatch, the Guardian Angel
CloudWatch can monitor practically anything in AWS, from EC2 instances to databases and Lambda functions. It's like having a guardian angel for your cloud infrastructure, ready to alert you if anything goes awry.
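To give a concrete flavor of that breadth, here is a small sketch using Boto3 (the AWS SDK for Python, which we'll use again later in this post) that lists the CPUUtilization metrics your EC2 instances publish to CloudWatch. The function is illustrative; actually running it requires boto3 installed and AWS credentials configured.

```python
def list_ec2_cpu_metrics():
    """Return every CPUUtilization metric that EC2 instances report to CloudWatch."""
    import boto3  # imported here so the sketch loads even if boto3 isn't installed yet
    cloudwatch = boto3.client('cloudwatch')
    paginator = cloudwatch.get_paginator('list_metrics')
    metrics = []
    # CloudWatch paginates results, so walk every page of the EC2 namespace
    for page in paginator.paginate(Namespace='AWS/EC2', MetricName='CPUUtilization'):
        metrics.extend(page['Metrics'])
    return metrics

# Needs configured AWS credentials; uncomment to run against your account:
# for metric in list_ec2_cpu_metrics():
#     print(metric['Dimensions'])
```

Swap in other namespaces like `AWS/Lambda` or `AWS/RDS` to browse what CloudWatch is collecting for those services.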
Setting Up Your Detective: CloudWatch Alarms
Now, let's talk about setting up alarms to ensure CloudWatch alerts you when things go south. Imagine you're a chef, and you're cooking a delicious meal. You don't want your stove to catch fire without you knowing, right? That's where CloudWatch alarms come in.
Setting Up Alarms
Step 1: Access CloudWatch
Go to your AWS Management Console, type "CloudWatch" into the search bar, and click on the CloudWatch service.
The default CloudWatch home page will appear, looking something like this:
Step 2: Create an Alarm
In the CloudWatch dashboard, click "In alarm" under the "Alarms" section in the left sidebar.
Click the "Create Alarm" button.
Step 3: Select a Metric
Choose a metric to monitor. For example, you can select "EC2" under the "Browse" tab and then choose a metric like CPUUtilization.
Define conditions. For instance, you might want to trigger an alarm when CPU utilization exceeds 80%.
Step 4: Configure Actions
Set up the actions to be taken when the alarm is triggered. You can choose to send an email or SMS notification (via an SNS topic), or even trigger an AWS Lambda function to respond to the alarm.
Configure the name and description of your alarm for easy identification.
Step 5: Review and Create
Double-check your settings, and if everything looks good, click "Create Alarm."
And there you have it! Your alarm is set up.
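The console steps above can also be scripted with Boto3. Here's a minimal sketch of the same alarm (average CPU above 80%); the instance ID, SNS topic ARN, and alarm name are placeholders you would replace with your own, and the actual API call is left commented out because it needs AWS credentials.

```python
def build_cpu_alarm_params(instance_id, topic_arn):
    """Arguments for put_metric_alarm: alarm when average CPU exceeds 80% (Steps 3-4)."""
    return {
        'AlarmName': 'high-cpu-utilization',
        'AlarmDescription': 'Alarm when average CPU exceeds 80%',
        'Namespace': 'AWS/EC2',
        'MetricName': 'CPUUtilization',
        'Dimensions': [{'Name': 'InstanceId', 'Value': instance_id}],
        'Statistic': 'Average',
        'Period': 300,                       # evaluate in 5-minute windows
        'EvaluationPeriods': 1,              # alarm after a single breaching window
        'Threshold': 80.0,
        'ComparisonOperator': 'GreaterThanThreshold',
        'AlarmActions': [topic_arn],         # e.g. an SNS topic that emails you
    }

# Placeholder IDs -- replace with your own instance and SNS topic
params = build_cpu_alarm_params('i-0123456789abcdef0',
                                'arn:aws:sns:us-east-1:123456789012:my-alerts')

# Needs boto3 and configured AWS credentials; uncomment to create the alarm:
# import boto3
# boto3.client('cloudwatch').put_metric_alarm(**params)
print(params['AlarmName'])
```

Scripting the alarm this way makes it easy to recreate in another account or region later.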
The Cloud Chef:
Think of CloudWatch alarms as your smoke detectors in the kitchen. When the smoke from an overcooked dish sets off the alarm, you know it's time to act and prevent a culinary disaster. Similarly, CloudWatch alarms notify you when something's off with your AWS resources, giving you a chance to react before things get out of hand.
Fun Fact #2: CloudWatch Logs
But CloudWatch isn't just about monitoring. It's also a master logger. It can capture logs from your applications and resources. It's like having a diary for your cloud, recording every significant event.
Getting Started with CloudWatch Logs
Let's take a look at how to set up CloudWatch Logs:
Step 1: Create a Log Group
In the CloudWatch dashboard, click "Log groups" under the "Logs" section in the left sidebar.
Click the "Create log group" button.
Give your log group a name and click "Create log group."
Step 2: Create a Log Stream
Inside your log group, click the "Create log stream" button.
Name your log stream and click "Create log stream."
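If you'd rather script Steps 1 and 2, the same log group and log stream can be created with Boto3. A minimal sketch, reusing the names from this post; the call at the bottom is commented out because it needs AWS credentials.

```python
def ensure_log_destination(group_name, stream_name):
    """Create a log group and a log stream inside it, ignoring 'already exists' errors."""
    import boto3  # imported here so the sketch loads even if boto3 isn't installed yet
    logs = boto3.client('logs')
    try:
        logs.create_log_group(logGroupName=group_name)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass  # the group was already created (e.g. via the console) -- that's fine
    try:
        logs.create_log_stream(logGroupName=group_name, logStreamName=stream_name)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass

# Needs configured AWS credentials; uncomment to run:
# ensure_log_destination('knowmam-logs', 'kms-log-stream')
```

Catching `ResourceAlreadyExistsException` makes the function safe to run repeatedly, which is handy in setup scripts.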
Step 3: Publish Logs
Now, it's time to publish logs to your log stream. Depending on your application, you can use various AWS SDKs or agents to send logs to CloudWatch Logs.
We'll use Python to publish logs to your CloudWatch Logs log stream. Make sure you have the AWS SDK for Python (Boto3) installed, along with the AWS CLI and a configured AWS account. If you run into any setup issues, post your questions in the comments and I'll be happy to assist.
```python
import boto3
import time

# Create a CloudWatch Logs client
cloudwatch_logs = boto3.client('logs')

# Define the log group name and log stream name (created in Steps 1 and 2)
log_group_name = 'knowmam-logs'
log_stream_name = 'kms-log-stream'

# Your log message to be published
log_message = 'This is a sample log message from Python Boto3.'

# Publish the log message to the specified log group and log stream
cloudwatch_logs.put_log_events(
    logGroupName=log_group_name,
    logStreamName=log_stream_name,
    logEvents=[
        {
            'timestamp': int(round(time.time() * 1000)),  # current time in milliseconds
            'message': log_message
        }
    ]
)

print(f"Published log message: '{log_message}' to '{log_group_name}/{log_stream_name}'")
```
Run this Python code from the local machine where you configured the AWS CLI:
python filename.py - Windows
python3 filename.py - Linux | Mac
Now head over to the CloudWatch log stream you created, and you should see your log message there.
We did it!!!
Step 4: Analyze Logs
Once your logs are flowing into CloudWatch, you can use CloudWatch Logs Insights to search, analyze, and visualize your log data. It's like having a magnifying glass for your diary entries.
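As a quick sketch of what that looks like from code: Logs Insights queries are asynchronous, so you start a query and then poll for its results. The query string below is a simple illustrative example (the 20 most recent log lines); actually running it requires boto3 and AWS credentials.

```python
import time

# A sample Logs Insights query: the 20 most recent log lines
QUERY = """
fields @timestamp, @message
| sort @timestamp desc
| limit 20
"""

def run_insights_query(log_group, query=QUERY, minutes=60):
    """Run a Logs Insights query over the last `minutes` minutes and return the results."""
    import boto3  # imported here so the sketch loads even if boto3 isn't installed yet
    logs = boto3.client('logs')
    now = int(time.time())
    started = logs.start_query(
        logGroupName=log_group,
        startTime=now - minutes * 60,  # Insights takes Unix timestamps in seconds
        endTime=now,
        queryString=query,
    )
    # Insights queries run asynchronously -- poll until the query finishes
    while True:
        result = logs.get_query_results(queryId=started['queryId'])
        if result['status'] in ('Complete', 'Failed', 'Cancelled'):
            return result
        time.sleep(1)

# Needs AWS credentials; uncomment to query the log group from this post:
# print(run_insights_query('knowmam-logs'))
```

The same query can of course be pasted straight into the Logs Insights console editor.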
Hope you find this blog helpful. Please share your thoughts in the comments; your feedback helps me refine the series and provide more insightful content. Happy Learning!
If you want to learn more about CloudWatch through a video tutorial, check out the wonderful video made by my brother Abhishek Veeramalla. Click here: https://www.youtube.com/watch?v=u4XngwbY-O0&list=PLdpzxOOAlwvLNOxX0RfndiYSt1Le9azze&index=18&pp=iAQB