AWS re:Invent 2024 personal recap

Sessions Attended:
  1. SVS310: Learn multi-tier application architectures on Amazon ECS
  2. ANT342: Operate and scale managed Apache Kafka and Apache Flink clusters
  3. SVS218: Accelerate Python and .NET Lambda functions with SnapStart
  4. DAT307: Gen AI incident detection & response systems with Aurora & Amazon RDS
  5. SUP311: Rapid detection and noise reduction using automation
  6. DAT405: Deep dive into Amazon Aurora and its innovations
  7. OPN310: Running Streamlit applications on AWS
  8. BSI102: What’s new with Amazon QuickSight
  9. NFX305: How Netflix autopilots migration from Amazon RDS to Aurora at scale
  10. FSI315: JPMorganChase: Real-time fraud screening at massive scale

SVS310: Learn multi-tier application architectures on Amazon ECS
- Workshop: my issue with Amazon workshops is that they give you four hours of material to get through but only two hours with a dedicated workspace. It’s strange that they don’t provide all of the materials outside of the workshop so you can actually reproduce the results after class. They also use a ton of “hacks/cheats” with code generation to speed things up.

ANT342: Operate and scale managed Apache Kafka and Apache Flink clusters
Takeaway: use the managed Kafka service (Amazon MSK) with express brokers.
- Large express brokers can do 45 Mb/s
- No disruption on scale-up
- Re-balancing takes minutes
- 90% faster recovery from failures
- Flink: dev tools for the streaming DataStream API
session notes
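The express-broker takeaway is mostly a provisioning choice. A rough sketch of what that could look like with boto3's create_cluster_v2; the Kafka version, instance type, subnets, and security group below are placeholder assumptions, not values from the session:

```python
import boto3

# Minimal sketch: provision an MSK cluster that uses express brokers.
# Subnet IDs, security group, Kafka version, and broker count are placeholders.
kafka = boto3.client("kafka")

response = kafka.create_cluster_v2(
    ClusterName="demo-express-cluster",
    Provisioned={
        "KafkaVersion": "3.6.0",
        "NumberOfBrokerNodes": 3,
        "BrokerNodeGroupInfo": {
            # Express broker instance types use the "express." prefix
            "InstanceType": "express.m7g.large",
            "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    },
)
print(response["ClusterArn"])
```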

SVS218: Accelerate Python and .NET Lambda functions with SnapStart
Gives Lambda functions 3x faster start times by caching a snapshot of RAM/storage. Apps need to be SnapStart-aware for maximum speed. Works really well with Python 3.12 and later and .NET 8+.
- https://github.com/aws/snapshot-restore-py
- https://nuget.org/packages/Amazon.Lambda.Core
- Be careful with unique values so they don’t end up getting cached by SnapStart and reused.
- Be mindful of DNS caching, pulling creds too soon, network connections, and ephemeral data.
- Pricing: $3.9 per GB and $1.4 per GB restored with 10k restores.
session notes
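A minimal sketch of what "SnapStart-aware" means in practice, using the runtime hooks from the aws/snapshot-restore-py repo linked above; the handler and the regenerated value are my own illustration, not from the session:

```python
# Sketch of a SnapStart-aware Python handler. Anything unique (IDs, creds,
# open connections) is torn down before the snapshot and re-created after
# restore so it is not baked into the cached image and reused.
import uuid

from snapshot_restore_py import register_after_restore, register_before_snapshot

request_salt = None  # example of a value that must be unique per environment


def before_snapshot():
    # Runs before the snapshot is taken: drop anything that must not be cached
    # (connections, cached credentials, unique identifiers, ...).
    global request_salt
    request_salt = None


def after_restore():
    # Runs after every restore: regenerate unique state, re-open connections,
    # and refresh credentials here.
    global request_salt
    request_salt = str(uuid.uuid4())


register_before_snapshot(before_snapshot)
register_after_restore(after_restore)


def handler(event, context):
    # Hypothetical handler: the salt differs per restored execution environment.
    return {"salt": request_salt}
```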

DAT307: Gen AI incident detection & response systems with Aurora & Amazon RDS
This was a workshop where, again, they used a lot of command-line hacks and there wasn’t nearly enough time to get through it.

SUP311: Rapid detection and noise reduction using automation
Use automation if you don’t want to drown in incident response at scale. I walked out. session notes

DAT405: Deep dive into Amazon Aurora and its innovations
Very cool session. I was like “a kid in a candy store” listening to the presenter talk about how Aurora works under the hood. Not an empty seat in the session. I didn’t take notes, only captured slides. Very cool. presentation on YouTube

OPN310: Running Streamlit applications on AWS
- Streamlit is very slick for presenting data.
- Check out the “AWS Samples” GitHub.
- ALB auth is the fastest way to get authentication in front of a Streamlit app (rough sketch below).
Any time I walked into a session and saw a whiteboard, I knew I was in trouble. Smart people have a habit of being lazy when it comes to preparing for speaking engagements. What ends up happening is a random flow of presentation and time wasted waiting for them to draw visuals on the whiteboard.
session notes
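Rough sketch of the ALB-auth approach, assuming the app sits behind an Application Load Balancer with OIDC authentication on the listener rule and a Streamlit version new enough to expose request headers via st.context (the dataset and header handling here are illustrative assumptions):

```python
# Minimal Streamlit sketch: present a small dataset and show the identity
# forwarded by an ALB that handles OIDC authentication in front of the app.
# Assumes Streamlit >= 1.37 for st.context.headers; the ALB injects the
# authenticated identity in the x-amzn-oidc-* request headers.
import pandas as pd
import streamlit as st

st.title("re:Invent demo dashboard")

# The ALB adds this header after a successful OIDC login.
user = st.context.headers.get("x-amzn-oidc-identity", "anonymous")
st.caption(f"Signed in as: {user}")

df = pd.DataFrame(
    {"day": ["Mon", "Tue", "Wed", "Thu"], "sessions": [2, 3, 4, 1]}
)
st.dataframe(df)
st.bar_chart(df, x="day", y="sessions")
```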

BSI102: What’s new with Amazon QuickSight
Very slick demo of QuickSight. I will carefully consider QuickSight vs. Streamlit for presenting my data at work.
- QuickSight can do shareable dashboards via link.
- 20 GB of SPICE is the default.
- QuickSight looks cool but the pricing strikes me as sketchy. That is just my opinion.
- It can now do AI-generated dashboards and reports. They didn’t get into what percentage of data/results is hallucinated by the AI.
- The demo had a lot of whiz-bang, but not how they set up all the inputs to get the results.
- Can use Word docs as a source for AI analysis, which is slick.
- They claim 9-10x productivity gains by data analysts.
- In the demo they did real-time analysis of data using AI.
- Pricing: $50/user/mo for Pro; read-only was $3/user/mo.
https://democentral.learnquicksight.online
https://aws.amazon.com/quicksight/q/
session notes youtube

NFX305: How Netflix autopilots migration from Amazon RDS to Aurora at scale
This was half Q&A with the audience and they used a whiteboard. They get a pass because they didn’t work for Amazon and everyone was curious to hear what was under the hood at Netflix. Netflix has something like 2k database clusters spread across all the popular tech. This talk was focused on upgrading Aurora PostgreSQL. They used a CI/CD tool for “customer” self-service and automated rollback of Postgres updates. I think they said the tool was Spinnaker (https://spinnaker.io/docs/setup/other_config/ci/), though it sounded like they said “spincut” or something like that; I need to email the speakers for clarification. The secret sauce of their “customer” self-upgrade service was a custom data access layer that all the customers implement, which lets Netflix swap out database DNS on the fly without “customer” intervention. session notes
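My read on the DNS-swap idea, as a hedged sketch and not Netflix’s actual data access layer (the alias name, driver, and helper are my own illustration): application code only ever connects through a stable alias, so repointing that CNAME at a new Aurora cluster migrates traffic without the “customer” changing anything.

```python
# Hedged illustration of a data access layer that hides the physical database
# endpoint behind a stable DNS alias. Repointing the alias (a CNAME) at a new
# Aurora cluster moves traffic without any change in the calling code.
import psycopg

SERVICE_DB_ALIAS = "orders-db.internal.example.com"  # stable CNAME, not a cluster endpoint


def get_connection() -> psycopg.Connection:
    # All "customer" code goes through this helper; nobody hard-codes the
    # Aurora writer endpoint, so the platform team can swap it behind DNS.
    return psycopg.connect(
        host=SERVICE_DB_ALIAS,
        dbname="orders",
        user="app_user",
        password="example-only",  # in practice this would come from a secrets manager
        connect_timeout=5,
    )


if __name__ == "__main__":
    with get_connection() as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT version()")
            print(cur.fetchone())
```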

FSI315: JPMorganChase: Real-time fraud screening at massive scale
This session’s name wasn’t quite what the content really was, but I liked it all the same. It was more about the AWS infrastructure that Chase uses for their massive fraud detection system. The takeaway: they use many layers of AWS services working in concert to auto-scale and to make it possible for their analysts to add new checks/features without involving the engineering teams. session notes