Notes from this video.

What is it?
- SQS (Simple Queue Service) is one of the first AWS services, launched in 2006.
- Asynchronous, message-based communication, as opposed to synchronous API calls.
- Scalable, highly available, managed, and cheap.

Core concepts
- A queue of JSON messages; a message is 256 KB at max.
- A publisher/producer puts a message into the queue (enqueue).
- A processor/consumer gets a message from the queue (dequeue).
- A queue has a name.
- The consumer periodically polls the queue, so it is two-way communication, as opposed to an API call.
- As soon as a consumer gets a message, it is no longer visible in the queue.
- Once a message is processed, it is completely deleted from the queue.
- Many threads/processes can poll a queue at once (handled behind the scenes; this is what makes it scalable).
- Only a single thread/process can process a given message at once.
- Long polling is preferable to multiple short polls.
- Cross-account publishing/processing is possible.
- Dead-Letter Queues (DLQ) store failed messages for later processing (a message may be moved to the DLQ after some retries).

Flow
- A message is published into the queue.
- A consumer takes it, and the visibility timeout countdown starts.
- No one else sees this message during the timeout.
- If the message is processed successfully, it is deleted from the queue.
- The consumer may fail to process the message (for ex.
it needs to call a database and do something with it).
- Once the timeout expires, the message is put back into the queue to be retried later.

SQS vs API
- The consumer can choose its rate of processing (5 msg/sec, for example).
- Keeps the two services separate from each other.
- Guaranteed eventual processing (good for non-realtime apps).
- Services are decoupled: if the consumer fails during a bad deployment, the publisher still sends messages, which are not lost and can be processed later.
- SQS itself is a very reliable service and effectively never goes down.

Standard vs FIFO queues

Standard queue
- Order of message processing is not guaranteed (best-effort ordering).
- At-least-once delivery: there is a small chance that the same message is processed several times.
- Unlimited publish and consume rate.

FIFO queue
- First-In-First-Out ordering: the order of message processing is guaranteed.
- Exactly-once processing.
- 300 transactions per second max, or 3000 with batching.
- Approx. 25% more expensive.
- Can group messages and process the groups separately.

Common patterns
- Fanout: utilizes the AWS SNS notification service. The publisher does not put the message directly into a queue but into SNS; SNS then distributes the message to the linked queues, so one message can reach several different consumers.
- Serverless processing with backpressure control: an SQS queue with an AWS Lambda processor (very useful).
- Job buffering: for example, a cron-like job with AWS CloudWatch that publishes messages into a queue. The processor can be AWS EC2 for big tasks or AWS Lambda for smaller ones.

Configuration with the AWS UI
- Log in and go to SQS.
- In the top left corner, click the create queue button.
- Choose the type and a name for the queue; these cannot be modified later.
- Set the visibility timeout: once a consumer takes a message from the queue, other consumers do not see it during the timeout (30s); the message is invisible to them. If it is not processed within the timeout, it becomes visible again.
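The visibility-timeout flow and the DLQ retry behaviour described above can be sketched with a tiny in-memory queue. This is not the AWS SDK or real SQS, just an illustration under assumed names (`SimpleQueue`, `maxReceiveCount` mirroring the "maximum receives" setting):

```javascript
// In-memory sketch of SQS-style visibility timeout and DLQ redrive.
// NOT the AWS SDK; it only illustrates the flow described in the notes.
class SimpleQueue {
  constructor({ visibilityTimeoutMs = 30000, maxReceiveCount = 3, dlq = null } = {}) {
    this.visibilityTimeoutMs = visibilityTimeoutMs;
    this.maxReceiveCount = maxReceiveCount; // the "maximum receives" setting
    this.dlq = dlq;                         // dead-letter queue, another SimpleQueue
    this.messages = [];
  }

  send(body) {
    this.messages.push({ body, receiveCount: 0, invisibleUntil: 0 });
  }

  // Receiving hides the message for the visibility timeout instead of deleting it.
  receive(now = Date.now()) {
    const msg = this.messages.find((m) => m.invisibleUntil <= now);
    if (!msg) return null;
    msg.receiveCount += 1;
    msg.invisibleUntil = now + this.visibilityTimeoutMs;
    if (msg.receiveCount > this.maxReceiveCount && this.dlq) {
      // Too many failed attempts: move the message to the dead-letter queue.
      this.messages = this.messages.filter((m) => m !== msg);
      this.dlq.send(msg.body);
      return this.receive(now); // try the next visible message
    }
    return msg;
  }

  // A successful consumer deletes the message explicitly.
  delete(msg) {
    this.messages = this.messages.filter((m) => m !== msg);
  }
}
```

A consumer that never calls `delete` sees the same message reappear after each timeout until it crosses `maxReceiveCount` and lands in the DLQ, which is exactly the retry loop the notes describe.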
- Set the message retention period: how long a message stays in the queue (4d).
- Set the delivery delay: a message can be put into the queue but stay invisible for some duration (usually not used).
- Set the maximum message size: self-explanatory.
- Set the receive message wait time: the long-polling configuration. If no message is available, the connection stays open for 0...20s, waiting to hand over a new message. If the wait time is not set, a consumer may ask for a message over and over again. Good for cost reduction; set it to 10s.
- Enable (or not) a dead-letter queue: if a message fails to be processed, it is sent to a second queue. The first time a message fails, it is returned to the queue after the visibility timeout elapses; this may then happen a 2nd, 3rd... time. After several attempts the message can be moved to a DLQ. If the main queue is called "demo", the convention is to name the DLQ "demo-dlq". The DLQ can be processed at a later time, and we can set up logs, alarms, and emails for failures. The number of failed processing attempts after which a message goes to the DLQ is specified in the maximum receives box.
- Every queue has a unique ARN id, which can be referenced from a Lambda consumer.
- Important tabs are available for the queue: SNS subscriptions, Lambda triggers, dead-letter queue, monitoring, tagging, etc.
- After queues are created, they can all be found here.
- Inside the queue we can test it with the Send and receive messages button: provide a body in JSON format and push the Send message button. Below, we can retrieve the message with Poll for messages.

Lambda as a queue consumer
- Go to AWS Lambda and click Create function.
- Use a blueprint and search for the sqs template.
- Give it a name, for example sqsDemoHandler.
- Create a new role from an AWS policy template, give it a name (sqsRole), and choose the Amazon SQS poller permissions role from the policy templates.
- Choose the SQS trigger ARN of our demo queue: arn:aws:sqs:eu-west-1:360117275238:demo
- Batch size (10) is the number of messages the function will read at once.
- A higher number leads to more cost.
- At the end we can see the body of the Lambda function, which just logs each message out:

    console.log('Loading function');

    exports.handler = async (event) => {
      // console.log('Received event:', JSON.stringify(event, null, 2));
      for (const { messageId, body } of event.Records) {
        console.log('SQS message %s: %j', messageId, body);
      }
      return `Successfully processed ${event.Records.length} messages.`;
    };

- Hit the create button and give it some time to spin up.
- Then go to the queue and send some messages.
- Go to the Lambda and check the Monitor tab to see whether the Lambda has been invoked.
- Go to AWS CloudWatch to check the console-logged messages.
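The blueprint handler can also be smoke-tested locally, without AWS, by invoking it with a hand-built event shaped like an SQS batch. This sketch fills in only the fields the handler actually reads (`messageId`, `body`); a real Lambda invocation carries more fields, such as `receiptHandle` and `attributes`:

```javascript
// Local smoke test: define the handler and invoke it with a fake SQS event.
const handler = async (event) => {
  for (const { messageId, body } of event.Records) {
    console.log('SQS message %s: %j', messageId, body);
  }
  return `Successfully processed ${event.Records.length} messages.`;
};

// Hand-built event mimicking an SQS batch of two messages.
const fakeEvent = {
  Records: [
    { messageId: '1', body: JSON.stringify({ hello: 'world' }) },
    { messageId: '2', body: JSON.stringify({ job: 42 }) },
  ],
};

handler(fakeEvent).then((result) => console.log(result));
// Logs each message, then: Successfully processed 2 messages.
```

This mirrors what the SQS trigger does for us in AWS: Lambda polls the queue, assembles up to "batch size" messages into one `event.Records` array, and deletes them from the queue when the handler returns without throwing.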