The Anxious Generation Book Notes

Our school district recently hosted a book club discussion on “The Anxious Generation” by Jonathan Haidt, and it was an eye-opening experience for me and many other parents. The book dives deep into how smartphones and social media have significantly impacted the mental health of today’s teens and tweens.

Haidt presents four key proposals to address this crisis:

  1. Delay smartphones until high school (allowing “dumb phones” or smartwatches instead)

  2. Implement phone-free school environments

  3. Restrict social media use until age 16

  4. Encourage more free play to build responsibility and independence

Here are some of my favorite quotes, grouped by category.

Social Media

While the reward-seeking parts of the brain mature earlier, the frontal cortex (essential for self-control, delay of gratification, and resistance to temptation) is not up to full capacity until the mid-20s, and preteens are at a particularly vulnerable point in development. As they begin puberty, they are often socially insecure, easily swayed by peer pressure, and easily lured by any activity that seems to offer social validation.

A fourth trend began just a few years later, and it hit girls much harder than boys: the increased prevalence of posting images of oneself, after smartphones added front-facing cameras (2010) and Facebook acquired Instagram (2012), boosting its popularity. This greatly expanded the number of adolescents posting carefully curated photos and videos of their lives for their peers and strangers, not just to see, but to judge.

the four foundational harms of the new phone-based childhood that damage boys and girls of all ages: social deprivation, sleep deprivation, attention fragmentation, and addiction.

Social media therefore harmed the social lives even of students who stayed away from it. (My added context: students felt left out if they weren’t on a social media app)

Compared with boys, when girls go onto social media, they are subjected to more severe and constant judgments about their looks and their bodies, and they’re confronted with beauty standards that are further out of reach.

Free Play

Children can only learn how to not get hurt in situations where it is possible to get hurt, such as wrestling with a friend, having a pretend sword fight, or negotiating with another child to enjoy a seesaw when a failed negotiation can lead to pain in one’s posterior, as well as embarrassment. When parents, teachers, and coaches get involved, it becomes less free, less playful, and less beneficial. Adults usually can’t stop themselves from directing and protecting.

A key feature of free play is that mistakes are generally not very costly. Everyone is clumsy at first, and everyone makes mistakes every day. Gradually, from trial and error, and with direct feedback from playmates, elementary school students become ready to take on the greater social complexity of middle school. It’s not homework that gets them ready, nor is it classes on handling their emotions. Such adult-led lessons may provide useful information, but information doesn’t do much to shape a developing brain. Play does.

Experience, not information, is the key to emotional development. It is in unsupervised, child-led play where children best learn to tolerate bruises, handle their emotions, read other children’s emotions, take turns, resolve conflicts, and play fair. Children are intrinsically motivated to acquire these skills because they want to be included in the playgroup and keep the fun going.

The human brain contains two subsystems that put it into two common modes: discover mode (for approaching opportunities) and defend mode (for defending against threats). Young people born after 1995 are more likely to be stuck in defend mode, compared to those born earlier. They are on permanent alert for threats, rather than being hungry for new experiences. They are anxious.

Children are most likely to thrive when they have a play-based childhood in the real world. They are less likely to thrive when fearful parenting and a phone-based childhood deprive them of opportunities for growth.

Maturity

If a child goes through puberty doing a lot of archery, or painting, or video games, or social media, the activities will cause lasting structural changes in the brain, especially if they are rewarding.

Natural sleep patterns shift during puberty. Teens start to go to bed later, but because their weekday mornings are dictated by school start times, they can’t sleep later. Rather, most teens just get less sleep than their brains and bodies need. This is a shame because sleep is vital for good performance in school and life, particularly during puberty, when the brain is rewiring itself even faster than it did in the years before puberty.

Friendships

All know that they will be chosen or passed over based in part on their appearance. But for adolescent girls, the stakes are higher because a girl’s social standing is usually more closely tied to her beauty and sex appeal than is the case for boys.

The happiest girls “aren’t the ones who have the most friendships but the ones who have strong, supportive friendships, even if that means having a single terrific friend.”


Python AWS Lambda: Create a file in memory

If you need to create a file in an AWS Lambda function, you have to write it to /tmp, because the rest of the file system is read-only. But if you’re just emailing the file, there’s no need to touch the file system at all; with some minor alterations you can keep the file entirely in memory and speed up the process.

Current code

import csv

csv_file = 'your_file.csv'
with open(csv_file, 'w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=headers)
    writer.writeheader()
    for item in my_data:
        writer.writerow(item)

return csv_file

New code

import csv
import io

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=headers)
writer.writeheader()
for item in my_data:
    writer.writerow(item)

# hand back the in-memory buffer instead of a file name
return buffer

You’ll also need a small tweak to your email script.

Current code

attachment = MIMEBase('application', 'octet-stream')
attachment.set_payload(open(csv_file, 'rb').read())

New code

attachment = MIMEBase('application', 'octet-stream')
# csv_file now holds the StringIO buffer returned above, so read it directly
attachment.set_payload(csv_file.getvalue())
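
For context, here is a minimal sketch of the whole flow wired together. The generate_csv() function, SMTP host, and addresses are placeholders I’ve assumed; only the buffer handling comes from the snippets above.

import smtplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

csv_buffer = generate_csv()  # hypothetical function containing the new code above

msg = MIMEMultipart()
msg['Subject'] = 'Your report'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'

# attach the in-memory CSV without ever touching the file system
attachment = MIMEBase('application', 'octet-stream')
attachment.set_payload(csv_buffer.getvalue())
encoders.encode_base64(attachment)
attachment.add_header('Content-Disposition', 'attachment', filename='your_file.csv')
msg.attach(attachment)

with smtplib.SMTP('smtp.example.com') as server:
    server.send_message(msg)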


Updating LlamaIndex to version 0.10

With the release of LlamaIndex v0.10, imports have moved from the top-level llama_index package to llama_index.core, llama_index.embeddings, and llama_index.llms.

ServiceContext has also been deprecated and replaced with Settings. A concise version of the existing code is below.

from llama_index import ServiceContext
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index.evaluation import FaithfulnessEvaluator, RelevancyEvaluator
from llama_index.llms import AzureOpenAI

def evaluate_llama(dataset):
    llm = AzureOpenAI()
    embed_model = AzureOpenAIEmbedding()
    service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

    faithfulness_gpt4 = FaithfulnessEvaluator(service_context=service_context)
    relevancy_gpt4 = RelevancyEvaluator(service_context=service_context)

    from llama_index.evaluation import BatchEvalRunner

The updated code removes creating and passing a ServiceContext in favor of the new Settings object, which also means you no longer have to pass llm and embed_model around. This part is all straightforward, but the migration tool does not take into account that you need to add some new packages to requirements.txt:

pip install llama-index-core llama-index-embeddings-azure-openai llama-index-llms-azure-openai

Once you’ve installed the new packages, you should be able to update your imports. A concise version of the changes is listed below.

from llama_index.core import Settings
from llama_index.core.evaluation import FaithfulnessEvaluator, RelevancyEvaluator
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

def evaluate_llama(dataset):
    Settings.llm = AzureOpenAI()
    Settings.embed_model = AzureOpenAIEmbedding()

    faithfulness_gpt4 = FaithfulnessEvaluator()
    relevancy_gpt4 = RelevancyEvaluator()

    from llama_index.core.evaluation import BatchEvalRunner
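
For completeness, here is a hedged sketch of how the evaluators might feed into BatchEvalRunner under the new Settings approach; the query_engine and questions variables are assumptions, not part of the original code.

import asyncio

from llama_index.core.evaluation import (
    BatchEvalRunner,
    FaithfulnessEvaluator,
    RelevancyEvaluator,
)

async def run_batch_eval(query_engine, questions):
    # evaluators pick up Settings.llm automatically, no ServiceContext needed
    runner = BatchEvalRunner(
        {"faithfulness": FaithfulnessEvaluator(), "relevancy": RelevancyEvaluator()},
        workers=4,
    )
    return await runner.aevaluate_queries(query_engine, queries=questions)

eval_results = asyncio.run(run_batch_eval(query_engine, questions))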


Squash all commits on a git branch

To squash all of the git commits on a branch, you can run

git reset $(git merge-base master $(git branch --show-current))

There are other required steps, such as making sure your branch is up to date with master first, but the gist of what you need is the single command above.
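
For reference, here is a rough sketch of the full flow; the branch name and commit message are placeholders.

# update your local master first
git checkout master
git pull
git checkout my-feature-branch

# move the branch pointer back to where the branch forked from master;
# your changes stay in the working tree, just uncommitted
git reset $(git merge-base master $(git branch --show-current))

# re-commit everything as a single commit and push the rewritten branch
git add -A
git commit -m "Squashed commit"
git push --force-with-lease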


Resolving glibc errors with a Python module

We recently switched our Lambda build image to a Debian-based image and started receiving errors about glibc.

[ERROR] Runtime.ImportModuleError: Unable to import module 'app':
/lib64/libc.so.6: version 'GLIBC_2.28' not found
(required by /var/task/cryptography/hazmat/bindings/_rust.abi3.so)

After some googling, we realized that pip chooses the wheel based on the platform it runs on. Since we were running pip on a different machine than the one running our Python program, we needed to tell pip which platform to target.

RHEL/CentOS use manylinux2014, which is what we need to pass to pip:

--platform manylinux2014_x86_64

Additionally, we do not want pip to fall back to source packages, so we had to pass

--only-binary=:all:

Our final command ended up being:

python3 -m pip install --platform manylinux2014_x86_64 --only-binary=:all: -r requirements.txt
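
If pip complains about the platform options, note that recent pip releases may require a --target directory when --platform is set; a hedged variant that installs into a local directory (the directory name is just an assumption) looks like this:

python3 -m pip install \
    --platform manylinux2014_x86_64 \
    --only-binary=:all: \
    --target ./package \
    -r requirements.txt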


Using Spring JPA with table names with spaces, periods, and other special characters

Given a non-traditional table name, how do you get Spring JPA to recognize your @Entity properly?

If your table name has a period, such as odd.table, you use @Table(name="[odd].[table]").

If your table name has a slash, such as odd/table, you use @Table(name="[odd/table]").

If your table name has spaces, such as table with spaces, you use @Table(name="[table with spaces]").

TL;DR - [] are your friend.
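
As a quick illustration, here is a minimal sketch of an entity using the bracket syntax. The class and column names are made up, and the jakarta.persistence imports assume Spring Boot 3 (older versions use javax.persistence).

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

@Entity
@Table(name = "[table with spaces]")  // brackets quote the unusual table name
public class OddTableRow {

    @Id
    @Column(name = "id")
    private Long id;

    // getters and setters omitted
}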


Converting a JSON file to a key and value list using jq

Given a JSON file named data.json

{
  "name": "Matt",
  "job": "Engineer"
}

You can output the keys and values using the following

jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' data.json > file.txt

file.txt contains

name=Matt
job=Engineer

You can uppercase the key by piping .key to ascii_upcase

jq -r 'to_entries|map("\(.key|ascii_upcase)=\(.value|tostring)")|.[]' data.json > file.txt

file.txt now contains

NAME=Matt
JOB=Engineer

You can also prepend text to the keys; here we’ll prepend WOW_ to each key

jq -r 'to_entries|map("WOW_\(.key|ascii_upcase)=\(.value|tostring)")|.[]' data.json > file.txt

file.txt now contains

WOW_NAME=Matt
WOW_JOB=Engineer


External app config with React, Vite, AWS CloudFront, S3, and Secrets Manager

Putting secrets in your git repo is a no-no. Here’s how to keep your app configuration out of the repo using React, Vite, S3, and AWS Secrets Manager.

Create a secret with CloudFormation (you can also do this manually through the UI)

rSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
        Description: Secrets used to create application configuration
        Name: !Sub '${pProduct}'
        SecretString: '{}'
        Tags:
            - Key: Environment
              Value: !Ref pEnvironment

Outputs:
  oSecrets:
    Value: !Ref rSecret

Add your secrets using Secrets Manager in your AWS account

(Screenshot: adding secret values in the AWS Secrets Manager console)

Create a CodePipeline step to deploy your application to S3

- Name: Build_and_Deploy_To_S3
  ActionTypeId:
    Category: Build
    Owner: AWS
    Provider: CodeBuild
    Version: '1'
  Configuration:
    ProjectName: !Sub ${pProduct}-${pBusinessUnit}-S3Upload-${AWS::Region}
    # The env variables are necessary to retrieve the secret id, you can omit if you'd like to hard code it
    EnvironmentVariables: !Sub '[{"name":"S3_BUCKETS_ARTIFACT_VAR", "value":"CODEBUILD_SRC_DIR_${pBusinessUnit}S3", "type":"PLAINTEXT"}, {"name":"S3_BUCKETS_ARTIFACT_FILE", "value":"${pBusinessUnit}S3Buckets.json", "type":"PLAINTEXT"}]'
    PrimarySource: Source
  InputArtifacts:
    - Name: Source

Update your deploy to S3 step to pull in the secrets

profile=your-profile

npm ci
viteFilename=.env.production # .env.production is picked up by default by vite build; your file name may vary depending on which build mode you run

# Pull the secret you created earlier from secrets manager and output as json file
appConfigSecret=$(jq .oSecrets ${!S3_BUCKETS_ARTIFACT_VAR}/${S3_BUCKETS_ARTIFACT_FILE} -r)
aws secretsmanager get-secret-value --secret-id ${appConfigSecret} --query SecretString --profile ${profile} | jq -r . > secrets.json

# use jq to update your secrets from json to VITE_SECRET=secret-value
jq -r 'to_entries|map("VITE_\(.key|ascii_upcase)=\(.value|tostring)")|.[]' secrets.json > ${viteFilename}

# Run your build. It is very important to run the build after the secrets file is already on the file system, otherwise your application will not have access to the secrets
vite build

# copy your application files to s3
s3BucketPath=s3://your-bucket-path
aws s3 rm ${s3BucketPath} --recursive --profile ${profile} --quiet
aws s3 cp ./dist/ ${s3BucketPath} --recursive --sse AES256 --profile ${profile} --quiet

Finally, add a script to your package.json file so new developers can download the .env file from your S3 bucket to their local file system

"config": "npx path-exists-cli .env && echo 'exists' || aws s3 cp s3://insert-your-bucket-path/.env.production ./.env",

The beautiful thing about React plus Vite is that the .env file itself is never deployed anywhere; Vite inlines the VITE_-prefixed values into your bundle at build time.
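
For reference, the values surface in your code through import.meta.env. A tiny sketch is below; VITE_API_URL is a hypothetical key that would come from an API_URL entry in your secret.

// config.ts - only VITE_-prefixed variables are exposed to client code by Vite
export const config = {
  apiUrl: import.meta.env.VITE_API_URL,
};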


Cache bust JavaScript, CSS or other file in Dockerfile

If you have an application without a build system but need to cache-bust a JS file, this will do the trick

FROM nginx:1

COPY --chown=nginx:nginx html/ /usr/share/nginx/html

EXPOSE 8080

# Cache bust js file by appending date to scan.js file
RUN sed -i "s/scan.js/scan.js?a=$(date '+%Y%m%d%H%M')/g" /usr/share/nginx/html/index.html

ENTRYPOINT ["nginx", "-g", "daemon off;"]


Unzip Docker image and contents

If you ever need to see the files inside a Docker image, you can save the image locally and then extract all of its contents.

image_tag=repository:tag

docker save ${image_tag} > image.tar
tar xf image.tar
rm image.tar

for f in */; do
  if [ -d "${f}" ]; then
    cd "${f}" || continue
    # unzip each of the layers
    find ./ -type f -name "*.tar" -exec tar xf "{}" \;
    cd ../
  fi
done
