Links - Seth Godin Severe weather alert

Like a lot of Seth Godin posts, this one is short but impactful: Severe Weather Alert.

He discusses getting an alert every day about severe weather in his area; by now, it's just "weather."

We think that regularly alerting people to something will get their attention again and again.

In reality, the more an application cries wolf, the more likely we are to ignore it.

read more

Amazon Bedrock Updates for November 2024

Reducing hallucinations in large language models with custom intervention using Amazon Bedrock Agents

  • Amazon Bedrock Guardrails offers hallucination detection and grounding checks (existing functionality)
  • You can develop a custom hallucination score using RAGAS to reject generated responses (requires SNS)
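
A minimal sketch of that reject-on-low-score pattern using the RAGAS faithfulness metric (assuming the ragas 0.1-style evaluate API and its default OpenAI judge; the column layout, threshold, and wiring back into Bedrock Agents/SNS are my own illustrative choices, not taken from the AWS post):

# Sketch: score a RAG answer with RAGAS faithfulness and reject low-scoring responses
# Assumes: pip install ragas datasets, plus an OPENAI_API_KEY for the default judge model
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness

def accept_response(question: str, answer: str, contexts: list[str], threshold: float = 0.8) -> bool:
  # Build a one-row evaluation dataset in the column layout RAGAS expects
  ds = Dataset.from_dict({
    "question": [question],
    "answer": [answer],
    "contexts": [contexts],
  })
  result = evaluate(ds, metrics=[faithfulness])
  # Below the threshold, treat the answer as a likely hallucination
  return result["faithfulness"] >= threshold

# if not accept_response(q, generated_answer, retrieved_chunks):
#   regenerate the answer or route it to a human / notification topic instead of returning it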

Amazon Bedrock Flows is now generally available

  • Prompt Flows has been renamed to Amazon Bedrock Flows (Microsoft also uses the name Prompt Flow)
  • You can now filter harmful content using the Prompt node and Knowledge Base node
  • Improved traceability: you can now quickly debug workflows by tracing inputs and outputs

Prompt Optimization on Amazon Bedrock

  • This is exactly what it sounds like: you provide a prompt, and it is optimized for use with a specific target model, which can result in significant improvements on generative AI tasks
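
As a rough sketch of what this looks like from code, assuming the boto3 bedrock-agent-runtime client's optimize_prompt call (the request/response field names and the model ID below are my assumptions, not taken from the announcement):

# Rough sketch: ask Bedrock to rewrite a prompt for a specific target model
# Assumption: bedrock-agent-runtime exposes optimize_prompt and streams back events
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.optimize_prompt(
  input={"textPrompt": {"text": "Summarize the customer call transcript below in three bullet points."}},
  targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative target model
)

# The analysis and optimized prompt come back as a stream of events
for event in response["optimizedPrompt"]:
  print(event)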

read more

Sharing - RIP to RPA - The Rise of Intelligent Automation

Notes from this article by Andreessen Horowitz: RIP to RPA: The Rise of Intelligent Automation

Traditionally, robotic process automation (RPA) was a hard-coded "bot" that mimicked the exact keystrokes necessary to complete a task. With LLMs, however, the original vision of RPA is now possible: an AI agent can be prompted with an end goal, e.g., book a flight from DSM to ORD on these dates, and will have the correct agents available to complete the task.

There is a large opportunity for startups in this space, because no existing product meets the original vision of RPA. There are two main areas:

horizontal AI enablers that execute a specific function for a broad range of industries, and vertical automation solutions that build end-to-end workflows tailored to specific industries.

read more

Future of Business - Palo Alto Networks’ Nikesh Arora on Managing Risk in the Age of AI

I really enjoyed this podcast with Nikesh Arora, CEO of Palo Alto Networks, where he discussed how much of their strategy is tied to acquisition versus trying to build everything in-house. He had some other insights that probably aren't revolutionary, but I appreciated his openness in this interview.

Here are my key takeaways:

The AI Revolution and Cybersecurity

  • With practically everything being internet-connected, the potential points of vulnerability for cyberattacks are enormous
  • Bad actors are increasingly using AI to infiltrate systems, which requires companies like Palo Alto Networks to use AI to counter those attacks
  • He sees AI as a productivity tool that will augment human work, taking over repetitive tasks and allowing employees to focus on more enjoyable tasks

Acquisition and Integration

  • Palo Alto is acquiring innovative cybersecurity companies to stay ahead of threats
  • He stresses the importance of empowering the acquired teams and providing them with resources

Concerns and Risks

  • He discusses the importance of a zero trust security model, treating every user and device with the same level of scrutiny
  • They talk about the (obvious) potential for GenAI to be used maliciously
  • Arora anticipates regulations focusing on transparency, guardrails, and control of critical processes
  • He strongly emphasizes the importance of collaboration between industry and regulators.

read more

Book Notes - Software Engineering at Google

Software Engineering at Google

  • Programming is certainly a significant part of software engineering: after all, programming is how you generate new software in the first place. If you accept this distinction, it also becomes clear that we might need to delineate between programming tasks (development) and software engineering tasks (development, modification, maintenance).

  • This distinction is at the core of what we call sustainability for software. Your project is sustainable if, for the expected life span of your software, you are capable of reacting to whatever valuable change comes along, for either technical or business reasons.
  • Importantly, we are looking only for capability; you might choose not to perform a given upgrade, either for lack of value or other priorities. When you are fundamentally incapable of reacting to a change in underlying technology or product direction, you're placing a high-risk bet on the hope that such a change never becomes critical.

  • Team organization, project composition, and the policies and practices of a software project all dominate this aspect of software engineering complexity. These problems are inherent to scale: as the organization grows and its projects expand, does it become more efficient at producing software?

  • In 2012, we tried to put a stop to this with rules mitigating churn: infrastructure teams must do the work to move their internal users to new versions themselves or do the update in place, in backward-compatible fashion. This policy, which we’ve called the “Churn Rule,” scales better: dependent projects are no longer spending progressively greater effort just to keep up. We’ve also learned that having a dedicated group of experts execute the change scales better than asking for more maintenance effort from every user: experts spend some time learning the whole problem in depth and then apply that expertise to every subproblem. Forcing users to respond to churn means that every affected team does a worse job ramping up, solves their immediate problem, and then throws away that now useless knowledge. Expertise scales better.

  • The more frequently you change your infrastructure, the easier it becomes to do so.

  • We have found that most of the time, when code is updated as part of something like a compiler upgrade, it becomes less brittle and easier to upgrade in the future. In an ecosystem in which most code has gone through several upgrades, it stops depending on the nuances of the underlying implementation; instead, it depends on the actual abstraction guaranteed by the language or OS. Regardless of what exactly you are upgrading, expect the first upgrade for a codebase to be significantly more expensive than later upgrades, even controlling for other factors.

  • We believe strongly in data informing decisions, but we recognize that the data will change over time, and new data may present itself. This means, inherently, that decisions will need to be revisited from time to time over the life span of the system in question. For long-lived projects, it’s often critical to have the ability to change directions after an initial decision is made. And, importantly, it means that the deciders need to have the right to admit mistakes. Contrary to some people’s instincts, leaders who admit mistakes are more respected, not less.

  • Programming is the immediate act of producing code. Software engineering is the set of policies, practices, and tools that are necessary to make that code useful for as long as it needs to be used and allowing collaboration across a team.

  • “Software engineering” differs from “programming” in dimensionality: programming is about producing code. Software engineering extends that to include the maintenance of that code for its useful life span.

  • Software is sustainable when, for the expected life span of the code, we are capable of responding to changes in dependencies, technology, or product requirements. We may choose to not change things, but we need to be capable.

  • Being data driven is a good start, but in reality, most decisions are based on a mix of data, assumption, precedent, and argument. It’s best when objective data makes up the majority of those inputs, but it can rarely be all of them.

  • Software development is a team endeavor. And to succeed on an engineering team (or in any other creative collaboration), you need to reorganize your behaviors around the core principles of humility, respect, and trust.

  • It turns out that this Genius Myth is just another manifestation of our insecurity.

  • Many programmers are afraid to share work they’ve only just started because it means peers will see their mistakes and know the author of the code is not a genius.

  • The current DevOps philosophy toward tech productivity is explicit about these sorts of goals: get feedback as early as possible, test as early as possible and think about security and production environments as early as possible. This is all bundled into the idea of “shifting left” in the developer workflow; the earlier we find a problem, the cheaper it is to fix it.

  • A good postmortem should include the following:
  1. A brief summary of the event
  2. A timeline of the event, from discovery through investigation to resolution
  3. The primary cause of the event
  4. Impact and damage assessment
  5. A set of action items (with owners) to fix the problem immediately
  6. A set of action items to prevent the event from happening again
  7. Lessons learned
  • Admitting that you’ve made a mistake or you’re simply out of your league can increase your status over the long run. In fact, the willingness to express vulnerability is an outward show of humility; it demonstrates accountability and the willingness to take responsibility, and it’s a signal that you trust others’ opinions. In return, people end up respecting your honesty and strength. Sometimes, the best thing you can do is just say, “I don’t know.”

read more

Book Notes - Architecture Modernization

Architecture Modernization

  • Rather than treating teams as feature factories (feeding them predefined solutions), giving them outcomes to achieve and the freedom to discover solutions in their subdomains unlocks more of their creative talents.

  • As renowned product management expert and author Marty Cagan says, “If you’re just using your engineers to code, you’re only getting about half their value…. The best single source for innovation is your engineers”

  • Internal developer platforms (IDPs) are another crucial component to achieving IVSS. IDPs reduce the friction of building, deploying, and supporting code by providing an exceptional developer experience (DX) through the use of concepts like paved roads/golden paths. This allows stream-aligned teams to focus on business outcomes without being bogged down by extraneous infrastructure-related activities and development friction.

  • Modernization isn’t just rewriting the old system with new technologies. It’s an opportunity to completely rethink the UX, product functionality, business processes, and domain model and remove unneeded complexity.

  • The importance of ongoing learning and upskilling cannot be overstated. It is perhaps the most important part of architecture modernization. If teams don’t have the time and opportunity to learn and practice modern concepts, there is a serious risk that new architecture will be designed with the old ways of thinking, and many of its flaws will be carried across.

  • Architecture modernization is about converting dated architecture, which is a business liability, into modern architecture that provides a competitive advantage.

  • Modernization requires short-term compromises for long-term prosperity, which is why leaders are reluctant to commit, but this creates a negative spiral where the architecture becomes even more of a liability.

  • As more of the world is run by software, systems will become more complex, and architecture will become even more important.

  • Modernization does not happen without learning new skills and acting differently. Leaders need to be aware that a significant investment is necessary for supporting every employee involved in modernization to ensure they have the required skills. If not, modernization may take far longer, or the new architecture may look just like the old one or even worse. Learning and upskilling (the topic of chapter 17) is not a onetime workshop or training course; it’s an ongoing financial and time investment. Learning needs to be built into the organization’s culture.

read more

Book Notes - Open Talent by John Winsor

Open Talent by John Winsor

  • The post-Covid era has been an even bigger black swan, one that calls into question all our assumptions about how the world is supposed to work. According to Taleb, black swan events contain three elements:

    • The element of surprise, which catches everyone off guard
    • Impacts and outcomes that are substantial, with potentially global repercussions
    • The appearance of inevitability after the fact, given all the relevant signals and data
  • Leaders who want to change their own minds may want to consider an important reframing of recent events. The millions of dissatisfied employees who are joining the Great Resignation aren’t rejecting work. They are rejecting jobs that pay them less than they feel they are worth and that constrain their creativity and stifle their potential. They are looking for ways to do more while doing better for themselves. The organizations that have been experimenting with open talent strategies and outside-in innovation are capturing them and, as a result, continuing to push forward and succeed.

  • How to change your mind.

    • Disruption offers new opportunities for grand change, especially since the supply of talent has already made the shift. Companies must begin to adapt to talent because talent is no longer adapting to corporations. Platforms fundamentally change how workers engage with their employers and vice versa, shifting the power structure so that talent has the edge. By removing the idea of a physical workspace, open talent is no longer hampered by office politics and can thrive, focused on the things that motivate people the most.
  • Companies that can move quickly enough to profit from change don’t bog themselves down with a ton of bureaucracy when they set out to innovate. They don’t micromanage the process from the top down, and they don’t insist on owning all the means they employ to succeed. Their teams are light on their feet, have a sense of ownership over outcomes, and are empowered to make their own decisions and recruit all the help they need.

What Are the Goals of Your Center of Excellence?

  • A COE’s overarching goal should be to take full advantage of all the talent opportunities that are available. Every organization is unique, but we suggest you start with these three principles:

    • Understand your readiness. While many people inside your organization may be using open talent already, there can be, as we’ve seen, many blind spots and roadblocks when you try to institutionalize it. The better you understand your readiness, the better your chances of a successful adoption.
    • Be prepared; things evolve rapidly. Understand that the ecosystem of open talent is quickly evolving, just as technology is. With the emergence of VDI (virtual desktop infrastructure), there has been a profusion of solutions to compliance concerns and security issues. These solutions dramatically remove the friction of adoption. A COE is necessary to stay on top of these changes and be ready to implement those that best address the needs of the organization.
    • Fit your digital transformation to your talent strategy. In many companies, talent acquisition is siloed between HR, procurement, and innovation. The COE should digitally transform all these silos by using internal and external platforms.
  • Remember that open talent is about removing friction. It frees your people from the bureaucratic encumbrances that prevent them from moving faster than your competitors, from getting help from outsiders when they need it, and from capturing and applying the wisdom of crowds to the toughest problems. Companies that use those capabilities are like boxers, bobbing and weaving through punches, agilely sidestepping disruptions, and continually floating and testing new ideas and approaches.
  • Accordingly, when developing your open talent strategy, keep in mind the following elements: responsiveness, agility, speed, efficiency, commitment, and staying emergent. These are attributes as opposed to goals; think of them as benchmarks to guide your decision-making.

  • Assess. Before you can figure out where you’re going, you must know where you are. By asking the right questions, the COE leads the assessment process by helping leaders understand the organizational changes that are needed to develop open talent solutions.

  • Learn. This phase aims to use education, culture, and communications to create a coalition of the curious and willing. It’s easy, especially for talent innovators, to race ahead quickly with new ideas or models. But such speed only works when a single entrepreneur is leading the charge or needs convincing. When the proposed changes are companywide, people need to understand what they are getting into and why. The COE lays the groundwork for the program through workshops and formal and informal communications.

  • When companies encourage and support mobility within the organization, employees feel more valued and empowered to develop their skills continually, and are therefore more likely to stay on the job.

  • A company can lose good open talent candidates if it has rigid onboarding processes, especially those that take twelve weeks or longer. If at all possible, rearchitect your internal procedures such as background checks, drug testing, and IT security protocols to cut down on time, or allow the platform to carry out some of this due diligence.

  • The biggest opportunity for you when you start your open talent journey is to tap into your current team to help you solve problems and do the tasks you need. There is always a combination of cognitive surplus and people inside your organization looking for mobility with upskilling and project work.

  • Our research shows that, using platforms, it typically takes four days to hire a freelancer to do the work you need, instead of the average of two-plus months to find the right talent through traditional methods in this talent-constrained environment. We also find that the freelance talent you hire on a platform is typically 30 percent less expensive and 40 percent more productive than the internal employee performing the same tasks. You see this massive gain in productivity because you’re hiring freelancers to do a task, not to play a role in the company. You are paying them to do the work you want, not to spend time at corporate meetings, office gatherings, and the like.

  • External talent clouds for strategic advantages, cost savings, and flexibility. The ETC is a term we use for this growing practice; it’s an efficient and cost-effective way that companies can obtain the specialized talents they need from outside sources without sacrificing quality or productivity.
  • How does an ETC work? The key to success in this phase rests on strong and trusting relationships with the right platform partners. Our recommendation is to take a two-fold approach. Revisit how you managed your full-time staff and determine which of these roles are better suited for freelance workers. The goal here should be to become agile yet to remain focused on the indispensability of certain positions for the company’s operations to run smoothly.

  • Based on our analysis of research conducted by Michael Menietti and Karim Lakhani on Topcoder contest submissions across a range of software development projects, we’ve found that you don’t need hundreds of submissions to obtain that extreme value. You only need about twenty-two entrants.

  • When developing innovation strategies, firms must consider whether to make or buy them. In general, companies do not rely on any one strategy exclusively. One study found that 72 percent of firms that describe themselves as innovative actually rely on both making strategies and buying them. But there’s a third possible approach: innovation contests add a fresh perspective to the make-or-buy discussion.

  • Since 2018, when the platform Unilever uses, flex-work, was launched, the company has unlocked half a million hours of employee engagement and has seen a 41 percent improvement in overall productivity. Flex-work offers other benefits as well, such as helping employees whose jobs are in jeopardy because of automation to seek out opportunities for reskilling.

  • The flex-work platform was developed by Gloat and is used by over a hundred companies globally. Its founder, Ben Reuveni, got the idea for the platform when he was working at IBM and realized that it would be easier for him to find a job at a different company than it was to take his career in a new direction at IBM.

  • So, they did something much simpler. Sharma started an Excel spreadsheet with projects that his team needed help with and distributed it in a weekly email throughout the technology organization. Lo and behold, those projects were suddenly getting done: one hundred of them over two years, with a success rate of 95 percent. All that it required was getting the word out within SEI, says Sharma: “Now, managers start by looking for talent through a simple internal talent mechanism before they talk about going externally. And our talent is really engaged, as they have learned to use these projects as a way to learn and expand their career opportunities.”

  • Break down silos. Traditional organizations have rigid talent hierarchies in which employees’ skills are hoarded. This structure prevents the overall workforce from achieving its full potential. ITMs (internal talent marketplaces) break down silos and allow for cross-fertilization.
  • Improve engagement and retention. If people in your workforce don’t see a future with your organization, they’re going to find somewhere else to build one. ITMs empower employees to move laterally and vertically and to learn new skills and seek out new opportunities that align with their passion, skills, and ambition. Empowering your employees in this way helps your organization meet its goals.

  • Encourage experiential learning. Traditional development efforts fall short because they fail to include experiential learning (such as offering ways to learn by doing, rather than just by reading or listening to someone else). These efforts also ignore the employee’s acquired knowledge and ability to apply new skills in a real-time setting. ITMs create a unified workforce system that can act as a single source for skills management

  • ITMs are not merely the next evolution of HR technology. They transform organizations by offering employees who have a cognitive surplus the opportunity to work on projects they desire. In effect, these marketplaces also help employees with upskilling and reskilling while allowing them to maintain a sense of innovation and opportunities for professional growth. The ITM is best used to retain employees and can be quite powerful when combined with remote work and skill-building opportunities. For the enterprise, unlocking agility, breaking down silos, and capturing the cognitive surplus are just a few benefits.

read more

Book Notes - GitHub Actions in Action

GitHub Actions in Action

Generating an SBOM using the Microsoft SBOM tool

- name: Generate SBOM
  run: |
    curl -Lo $RUNNER_TEMP/sbom-tool https://github.com/microsoft/sbom-tool/releases/latest/download/sbom-tool-linux-x64
    chmod +x $RUNNER_TEMP/sbom-tool
    $RUNNER_TEMP/sbom-tool generate -b ./buildOutput -bc . -pn Test -pv 1.0.0 -ps mycompany -nsb https://sbom.mycompany.com -V Verbose

Job summaries

Here is an example that adds Markdown and plain HTML to the job summary:
  - run: echo '### Hello world! :rocket:' >> $GITHUB_STEP_SUMMARY
  - run: echo '### Love this feature! :medal_sports:' >> $GITHUB_STEP_SUMMARY
  - run: echo '<h1>Great feature!</h1>' >> $GITHUB_STEP_SUMMARY

Built-in functions in GitHub for expressions

toJSON()               // returns a pretty-printed JSON string for a value
fromJSON()             // parses a JSON string into a value or object
hashFiles()            // returns a SHA-256 hash of the files matching a path pattern
contains(search, item) // true if search contains item
startsWith()           // true if a string starts with a given value
endsWith()             // true if a string ends with a given value
format()               // replaces {N} placeholders in a string with values

Functions to check status of workflow job

success()   // true when no previous step or job has failed or been cancelled
always()    // always true, so the step runs even after a failure or cancellation
cancelled() // true if the workflow was cancelled
failure()   // true if any previous step or job has failed

Chaining workflow jobs

job_1:

job_2:
  needs: job_1   # job_2 starts only after job_1 completes successfully

read more

Book Notes - Generative Deep Learning by David Foster

Generative Deep Learning by David Foster

  • A generative model must also be probabilistic rather than deterministic, because we want to be able to sample many different variations of the output, rather than get the same output every time.

  • A generative model must include a random component that influences the individual samples generated by the model.

Representation Learning

  • Suppose you wanted to describe your appearance to someone who was looking for you in a crowd of people and didn’t know what you looked like. You wouldn’t start by stating the color of pixel 1 of a photo of you, then pixel 2, then pixel 3, etc. Instead, you would make the reasonable assumption that the other person has a general idea of what an average human looks like, then amend this baseline with features that describe groups of pixels, such as I have very blond hair or I wear glasses. With no more than 10 or so of these statements, the person would be able to map the description back into pixels to generate an image of you in their head.

If you use word tokens:

  • All text can be converted to lowercase, to ensure capitalized words at the start of sentences are tokenized the same way as the same words appearing in the middle of a sentence. In some cases, however, this may not be desirable; for example, some proper nouns, such as names or places, may benefit from remaining capitalized so that they are tokenized independently
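
A tiny, self-contained illustration of that trade-off (my own toy example, plain Python):

# Toy example: lowercasing merges "The"/"the" into one token type, but erases the case signal on proper nouns
text = "The River Seine runs through Paris. Near the river, the cafes stay open late."

def word_tokens(s: str, lowercase: bool) -> list[str]:
  if lowercase:
    s = s.lower()
  return s.replace(".", " ").replace(",", " ").split()

print(sorted(set(word_tokens(text, lowercase=True))))   # a single "the" token type; "paris" loses its capitalization
print(sorted(set(word_tokens(text, lowercase=False))))  # "The" and "the" are separate tokens; "Paris" keeps its case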

read more

Book Notes - Generative AI in Action

Generative AI in Action

Not a lot of notes here, because much of it covered things I've already learned. It would be a great resource for anyone new to the space.

In the past, we would need to use a named entity recognition (NER) model for entity extraction; furthermore, that model would need to have been trained on data containing the kinds of entities we care about. With LLMs, we can do this without any task-specific training, and they are often more accurate. While traditional NER methods are effective, they often require manual effort and domain-specific customization. LLMs have significantly reduced this burden, offering a more efficient and often more accurate approach to NER across various domains. A key reason is the Transformer architecture, which we will cover in the next few chapters. This is a great example of traditional AI being more rigid and less flexible than generative AI.
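
To make the contrast concrete, here is a minimal zero-shot extraction sketch using the OpenAI Python client (the model name, prompt wording, and entity schema are my own illustrative choices, not from the book):

# Sketch: zero-shot entity extraction with an LLM instead of a purpose-trained NER model
# Assumes: pip install openai (>=1.0) and an OPENAI_API_KEY in the environment
import json

from openai import OpenAI

client = OpenAI()

text = "Maria Chen joined Acme Corp in Des Moines on March 3, 2024."

completion = client.chat.completions.create(
  model="gpt-4o-mini",  # illustrative model choice
  response_format={"type": "json_object"},
  messages=[
    {"role": "system", "content": "Extract entities from the user's text as JSON with keys: people, organizations, locations, dates."},
    {"role": "user", "content": text},
  ],
)

print(json.loads(completion.choices[0].message.content))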

Counting tokens for GPT

import tiktoken as tk

def count_tokens(string: str, encoding_name: str) -> int:
  # Get the encoding
  encoding = tk.get_encoding(encoding_name)

  # Encode the string
  encoded_string = encoding.encode(string)

  # Count the number of tokens
  num_tokens = len(encoded_string)

  return num_tokens

# Define the input string
prompt = "I have a white dog nam"

# Display the number of tokens in the String
print("Number of tokens: ", count_tokens(prompt, "cl100k_base"))
# Running this code, as expected, gives us the following output:
# python countingtokens.py
# Number of tokens: 7

read more