while(motivation <= 0)

New year, new personal development
A week of personal development. I started the week working to implement IPv6 across my AWS VPC. With some help from Amazon Q I was able to create some new IPv6-friendly subnets and dipped my toe into getting rid of my IPv4 public IP addresses. To get it working on my existing network (a CLI sketch follows the list):
  • enable IPv6 on the VPC
  • calculate IPv6 subnet masks
  • create new subnets that support IPv6
  • create an “egress-only internet gateway” (IPv6 only)
  • add a route to allow internet traffic out through the new egress gateway
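
Roughly, the CLI version of those steps looks like the sketch below. The VPC, route table, CIDR blocks, and availability zone are placeholders, not my real resources.

# Sketch only: substitute your own VPC, route table, and CIDR values.
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc1234 --amazon-provided-ipv6-cidr-block
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.5.0/24 \
    --ipv6-cidr-block 2600:1f18:aaaa:bb05::/64 --availability-zone us-east-1a
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0abc1234
aws ec2 create-route --route-table-id rtb-0abc1234 \
    --destination-ipv6-cidr-block ::/0 --egress-only-internet-gateway-id eigw-0abc1234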

What I quickly discovered after removing my public IPv4 addresses in a VPC/subnet with no NAT gateways (no NAT gateway to save $) was that a lot of AWS services do not yet support IPv6. The biggest issue for me was the Elastic Container Registry.
My next project over break was to try to containerize all of my old websites that were being hosted on my project box. Getting Apache, PHP, and the dependency chain for my project websites working together ended up taking several days. Much to my surprise, the tool that ended up being most useful was Amazon Q; it proved more useful than ChatGPT when it came to getting Apache, PHP, and my dependencies to cooperate.
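For flavor, the containers ended up looking roughly like the stock Apache+PHP image plus site files. The sketch below is illustrative only: the php:8.2-apache base, the mysqli extension, and the site/ path are stand-ins, not my exact dependency chain.

# Sketch: write a minimal Apache/PHP Dockerfile and build it.
# Base image, extension, and paths below are stand-ins for my real setup.
cat > Dockerfile <<'EOF'
FROM php:8.2-apache
RUN docker-php-ext-install mysqli
COPY site/ /var/www/html/
EXPOSE 80
EOF
docker build --tag blog.myname.dev:latest .
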
Once I got my old websites containerized, I set about going after some nice-to-haves. I set up an automatic DNS/tag updater (the scripts below) so that my spawned instances would follow a naming pattern that makes them easy to manage. This will also eventually let me have totally dynamic hosting for my ECS tasks, clusters, and all of my Docker needs.

#!/bin/bash
# Build a new blog image, then clean up exited containers and stale images.
version=$(date +"%Y%m%d%H%M%S")
docker build --tag blog.myname.dev:"$version" .
success=$(docker images | grep -w "$version")
if [ -z "$success" ]; then
    echo "build failed"
    exit 1
fi
imageid=$(docker image ls | grep -w "blog.myname.dev" | awk '{print $3}')
#echo "new imageid: $imageid"
# The newest image is listed first.
lastimage=$(head -1 <<< "$imageid")
# Remove exited blog containers that are at least an hour old.
old_containers=$(docker ps -a | grep -w "blog.myname.dev" | grep -w Exited | grep -E "months|weeks|days|hours" | awk '{print $1}')
while IFS= read -r instance
do
        docker container rm "$instance"
done <<< "$old_containers"
echo "cleaning up old images"
# Remove every blog image except the one just built.
while IFS= read -r image; do
    if [ "$image" != "$lastimage" ]; then
        echo "removing image: $image"
        docker rmi "$image"
    fi
done <<< "$imageid"
echo "last imageid: $lastimage"
# CREATED spans three fields in `docker images` output (e.g. "2 weeks ago").
created=$(docker images | grep -w "$lastimage" | awk '{print $4, $5, $6}')
echo "created: $created"


#!/bin/bash
# Run the most recently built blog image and verify that it stays up.
img=$(docker images | grep blog.myname.dev | head -1 | awk '{print $3}')
echo "running image $img"
docker run -p 80:80 \
--volume /home/colin/projectbox/var/www/blog/blogdata:/var/www/blog/blogdata \
--volume /home/colin/projectbox/var/www/blog/logs:/var/log \
-e AWS_REGION=us-east-1 -td "$img"
echo "waiting for container to start"
sleep 5
contid=$(docker ps -a | grep "$img" | awk '{print $1}')
echo "container id is $contid"
# Field 7 of `docker ps -a` is the first word of STATUS ("Up"/"Exited")
# as long as the container's COMMAND prints as a single token.
status=$(docker ps -a | grep "$contid" | awk '{print $7}')
echo "container status is $status"
if [ "$status" != "Up" ]; then
    echo "container failed to start"
    docker logs "$contid"
    echo "removing container"
    docker rm "$contid"
fi


#!/bin/bash
# Compare internal DNS records against running instances and sync any drift.
ZONE_ID="myzoneid"
PATTERN="internal.cmh.sh."

# Get all records and filter for internal.cmh.sh entries
dnslist=$(aws route53 list-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --query "ResourceRecordSets[?ends_with(Name, '$PATTERN')].[ResourceRecords[].Value | [0], Name]" \
    --profile vacuum \
    --output text | sed 's/\.$//')

# Running instances that already follow the naming pattern.
instancelist=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=*.internal.cmh.sh" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[LaunchTime,PrivateIpAddress,Tags[?Key==`Name`].Value | [0]]' \
    --output text --profile vacuum --region us-east-1 | sort)

# Freshly spawned instances still carrying the default Name tag.
defaultinstances=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=Vacuum-Server" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[LaunchTime,PrivateIpAddress,Tags[?Key==`Name`].Value | [0]]' \
    --output text --profile vacuum --region us-east-1 | sort)

instanceCount=$(wc -l <<< "$instancelist")

echo "dns list: $dnslist"
echo "instance list: $instancelist"

# update_dns_loop.sh reads $dnslist from the environment, so export it.
export dnslist
bash update_dns_loop.sh "$instancelist" "$instanceCount"
bash update_dns_loop.sh "$defaultinstances" "$instanceCount"


#!/bin/bash
# Walk a list of "launchTime privateIp name" rows and reconcile DNS records
# and Name tags. Expects $dnslist to be exported by the calling script.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <instance-list> <instance-count>"
    echo "data format: <launch-time> <private-ip> <name>, one instance per line"
    exit 1
fi
echo "$1" | while read -r launchTime privateIp name; do
    echo "Checking $name ($privateIp)"
    if grep -q "$name" <<< "$dnslist"; then
        echo "  DNS record exists"
        if grep -q "$privateIp" <<< "$dnslist"; then
            echo "    IP address matches"
        else
            echo "    IP address does not match"
            sh update_internal_dns.sh "$name" "$privateIp"
        fi
    else
        echo "  DNS record does not exist"
        if [ "$name" = "Vacuum-Server" ]; then
            #will not work if more than one instance was spun up.
            name="vacuumhost$2.internal.cmh.sh"
            sh update-ec2-name-from-priv-ip.sh "$privateIp" "$name"
        fi
        sh update_internal_dns.sh "$name" "$privateIp"
    fi
done


#!/bin/bash
# Upsert an A record in the internal hosted zone for the given name and IP.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <record-name> <private-ip>"
    exit 1
fi
name=$1
privateIp=$2
aws route53 change-resource-record-sets \
            --hosted-zone-id myzoneid \
            --change-batch '{
                "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                    "Name": "'"$name"'",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [
                        {
                        "Value": "'"$privateIp"'"
                        }
                    ]
                    }
                }
                ]
            }' --profile vacuum --output text



#!/bin/bash
# Rename an EC2 instance (its Name tag) identified by its private IP address.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <private-ip> <new-name>"
    exit 1
fi

PRIVATE_IP=$1
NEW_NAME=$2

INSTANCE_ID=$(aws ec2 describe-instances \
    --filters "Name=private-ip-address,Values=$PRIVATE_IP" \
    --query "Reservations[].Instances[].InstanceId" \
    --output text --profile vacuum --region us-east-1)

if [ -z "$INSTANCE_ID" ]; then
    echo "No instance found with private IP $PRIVATE_IP"
    exit 1
fi

aws ec2 create-tags \
    --resources "$INSTANCE_ID" \
    --tags "Key=Name,Value=$NEW_NAME" \
    --profile vacuum --region us-east-1

if [ $? -eq 0 ]; then
    echo "Successfully updated tag for instance $INSTANCE_ID"
else
    echo "Failed to update tag"
fi


AWS Auto Scaling Groups
Today my focus was on starting work on a dynamically built replacement for my project box. Eleven iterations later, I had an auto-scalable box that can host vacuum-flask, connect to EFS, and talk to Redis to maintain session state. After spending most of the day on that, it occurred to me that, with my newly found user-data experience, I might be able to get away with using an “ECS optimized” instance, sneak vacuum-flask onto it, and still have capacity to spare for some cheap ECS tasks.

#!/bin/bash
# User data for the auto-scaled project box: patch the OS, mount EFS,
# start Docker, pull the vacuum-flask image, and join the ECS cluster.
sudo yum update -y
sudo yum upgrade -y
#Install EFS utils
sudo yum install -y amazon-efs-utils
#Debian/Ubuntu alternative: build efs-utils from source
#sudo apt-get -y install git binutils rustc cargo pkg-config libssl-dev gettext
#git clone https://github.com/aws/efs-utils
#cd efs-utils
#./build-deb.sh
#sudo apt-get -y install ./build/amazon-efs-utils*deb
sudo yum -y install docker
sudo systemctl enable docker
sudo systemctl start docker
#sudo yum -y install boto3
if [ ! -d /media/vacuum-data ]; then
  sudo mkdir /media/vacuum-data
fi
# User data runs as root, so the append to /etc/fstab works without sudo.
echo "fs-05863c9e54e7cdfa4:/ /media/vacuum-data efs _netdev,noresvport,tls,iam 0 0" >> /etc/fstab
#sudo systemctl daemon-reload
sudo mount -a

#docker start redis
# Log in to ECR, pull the vacuum-flask image, and start it via the run
# script kept on the EFS volume.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 631538352062.dkr.ecr.us-east-1.amazonaws.com
sudo docker pull 631538352062.dkr.ecr.us-east-1.amazonaws.com/cmh.sh:vacuumflask
sudo sh /media/vacuum-data/run.sh

#ECS related
# On an ECS-optimized AMI, point the agent at the cluster.
if [ -d /etc/ecs ]; then
  echo "ECS_CLUSTER=vacuumflask_workers" > /etc/ecs/ecs.config
  echo "ECS_BACKEND_HOST=" >> /etc/ecs/ecs.config
  #TODO: register with the alb?
fi
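
For context, a launch template plus an Auto Scaling group along these lines is what carries the user data above. The template name, AMI ID, instance profile, subnet IDs, and the userdata.sh filename are placeholders rather than my actual values.

# Sketch: wire the user data above into a launch template and an ASG.
# Names, AMI, instance profile, and subnet IDs below are placeholders.
aws ec2 create-launch-template \
    --launch-template-name vacuumflask-workers \
    --launch-template-data '{
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "IamInstanceProfile": {"Name": "vacuumflask-instance-profile"},
        "UserData": "'"$(base64 -w0 userdata.sh)"'"
    }' --profile vacuum --region us-east-1

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name vacuumflask-workers \
    --launch-template "LaunchTemplateName=vacuumflask-workers,Version=1" \
    --min-size 1 --max-size 1 --desired-capacity 1 \
    --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222" \
    --profile vacuum --region us-east-1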

Post re:Invent adventures
Since re:Invent, I’ve spent a few weekends playing around with AWS ALBs and AWS ECS. I got the ALB working after a while of messing with the security groups, and eventually I found where I needed to set the permissions to allow the ALB to log to S3: it turns out they live in the S3 bucket policy, and you have to grant an AWS account access to your bucket so it can write the logs. With ECS, I’ve run into a number of issues trying to get my blog running with sufficient permissions to do the things it needs to do. What’s interesting about the ECS interface is that the best way to use it is with JSON from the command line. This has some inherent issues, though, because it requires you to put a lot of environment-specific information in your JSON files. Ideally, if you’re checking in your source code, you wouldn’t be hard-coding secrets in it. After I got the basic environment working, I moved all of my secrets out of environment variables and into Secrets Manager, where they should have been to begin with. Along the way I have learned a lot more about containers, working with environment variables, and debugging both in containers and in local environments. The basic steps to get a container running in ECS (a sketch of the last two steps follows the list):
  • get the image uploaded to a container repo
  • permissions / ports
    1. ECS task permissions
    2. ECS task execution permissions
    3. security group access to the subnets and ports in play
  • create your task definition
  • create your service definition
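
As a rough illustration of the task and service definition steps, and of pulling secrets from Secrets Manager instead of plain environment variables, the sketch below registers a task definition and a service. The role ARNs, account ID, image URI, secret ARN, and subnet/security-group IDs are placeholders; only the cluster name comes from my setup.

# Sketch: task definition that reads a secret from Secrets Manager, plus a service.
# ARNs, the account ID, image URI, and network IDs below are placeholders.
aws ecs register-task-definition --cli-input-json '{
    "family": "blog",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["EC2"],
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::123456789012:role/blogEcsExecutionRole",
    "taskRoleArn": "arn:aws:iam::123456789012:role/blogEcsTaskRole",
    "containerDefinitions": [{
        "name": "blog",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/blog:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "secrets": [{
            "name": "BLOG_DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:blog/db-password"
        }]
    }]
}' --profile vacuum --region us-east-1

aws ecs create-service \
    --cluster vacuumflask_workers \
    --service-name blog \
    --task-definition blog \
    --desired-count 1 \
    --launch-type EC2 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0aaa1111],securityGroups=[sg-0ccc3333]}" \
    --profile vacuum --region us-east-1

The task execution role also needs permission to read that secret from Secrets Manager, on top of the usual ECR pull and log permissions.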