Weekly Round-Up - August 6, 2023
Articles, podcasts, and videos that give me cool insights into data and problem solving.
Insightful Content
Use Feature Analytics for Better Products
Pre-ordering Timo’s book has been on my to-do list for a little while now, and this blog post pushed it up to a high priority for my August book budget. I absolutely love the way that Timo has disassembled the atomic elements of a product and then thought about how to track the major activities and changes. This has always felt like an “I know it when I see it” kind of intuition for me, so seeing some diagrams helps me visualize this kind of exercise much better!
I also like to define how a feature has been used successfully and have a list of feature success events, which is helpful for many things.
This is a great starting point for thinking about event tracking! It’s easy to describe and gives everybody a good starting model to glom onto.
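To make the idea of “feature success events” a bit more concrete, here’s a minimal sketch of what that could look like in code. All the event names, fields, and the success rule below are hypothetical illustrations of mine, not taken from Timo’s post:

```python
# Hypothetical sketch of "feature success events" for a search feature.
# Event names, properties, and the success rule are illustrative only.

from dataclasses import dataclass, field

@dataclass
class FeatureEvent:
    feature: str                 # which product feature fired the event
    name: str                    # e.g. "search_submitted", "result_clicked"
    properties: dict = field(default_factory=dict)

# For each feature, define which events count as "used successfully".
SUCCESS_EVENTS = {
    "search": {"result_clicked", "result_saved"},
}

def is_success(event: FeatureEvent) -> bool:
    """A feature was used successfully if it emitted one of its success events."""
    return event.name in SUCCESS_EVENTS.get(event.feature, set())

events = [
    FeatureEvent("search", "search_submitted", {"query": "timelines"}),
    FeatureEvent("search", "result_clicked", {"rank": 1}),
]
successes = [e for e in events if is_success(e)]
```

The nice part of keeping the success definitions in one place is that “did this feature get used well?” becomes a lookup everybody shares, rather than a judgment call each analyst makes differently.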
The Invention of Time
https://www.historytoday.com/archive/history-matters/invention-time
This is a really cool historical piece on Joseph Priestley, the dude who invented the timeline. It’s a really interesting story about how Priestley combined philosophy, analytical thinking, and history to create a visual model representing time. This has so much in common with my thought process behind data models and data products — how do I turn an abstract idea into something discrete?
Priestley determined to create a chart of his own that readers could scan ‘at one view’. He made several innovations but one proved key: lines, inspired by his philosophy of time. For this, Priestley drew on a seemingly unconnected topic: John Locke’s 1690 account of abstract ideas.
Priestley’s timeline was so new that he published a small book alongside it, explaining its underlying principles. A Description of a Chart of Biography sought to justify representing times using lines:
As no image can be formed of abstract ideas, they are, of necessity, represented in our minds by particular, but variable ideas … THUS the abstract idea of TIME … admits of a natural and easy representation in our minds by the idea of a measurable space, and particularly that of a line; which, like time, may be extended in length, without giving any idea of breadth or thickness.
AI and the Future of Work: What Stays 100% Human
https://kozyrkov.medium.com/ai-and-the-future-of-work-what-stays-100-human-9007b3d1aeaa
Having just discovered Cassie Kozyrkov’s blog, I am excited to go down a rabbit hole here. This article is a gateway to a whole bunch of related articles on problem solving, thinking, and what makes us human.
Cassie breaks down the difference between thinking and thunking (defined below) and AI’s unique positioning to reduce thunking in order to allow us time to think.
While AI can automate thunking tasks, it can’t think for us. Thunking is the term I use to describe those tasks that are repetitive, predictable, and don’t require a high level of cognitive engagement, creativity, or critical thinking. These are the tasks that you can do almost on autopilot once you’ve figured out what needs to be done.
We have an unprecedented opportunity to leverage AI as a tool to free up more time for the thinking tasks that drive innovation and progress.
Thunking is more measurable than thinking, which is abstract, fickle, and quite resistant to reliable operationalization. Thus, we fear that the automation of thunking will make it harder for us to justify the thinking we are actually supposed to be doing.
GPT is Dumber than a 1990s AI
https://blog.metamirror.io/gpt-is-dumber-than-a-1990s-ai-3f068df8b036
I’m a fan of spicy takes. This article’s 🌶️🌶️🌶️ title caused me to immediately gravitate towards it.
If you set out with a test that an AI is good at, like asking Deep Blue to play chess, or GPT to take a test where rote learning is effective, then guess what… you’re going to prove that it is good at what it is good at.
The best way to get GPT to be good at maths? Get it to say “Oh crap, I’m bad at maths, I’ll hand this over to Wolfram Alpha”, the same would be true with Chess, build a Stockfish plug-in so it plays a phenomenal game of chess. This doesn’t mean that GPT is now able to do maths or chess, just that the system has been designed to choose the right tool for the right job.
I’ve explained LLMs to people as just being a parrot that wants a cracker. It makes the sounds that it thinks you want to hear in response to its input so that it gets rewarded. It doesn’t actually know what it’s saying. As such, I love this description of AI as a tool instead of as the ’80s sci-fi movie villain that some people really want it to be. LLMs in particular will be helpful for bridging contextual gaps for humans quickly and efficiently, but they’ll never have any idea what they actually mean. It’s the wrong tool for that.
As a sidenote, a while back somebody on the /r/anarchychess subreddit created a game played between a chess AI and ChatGPT. Eventually, ChatGPT just started making haphazard, bizarre, and frankly illegal moves because it had no idea what it was actually doing.
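The “right tool for the right job” point maps neatly onto how tool-using systems are actually wired up: the model doesn’t get smarter, a dispatcher just hands the task to a specialist. Here’s a deliberately tiny sketch of that routing idea; the tool functions, their canned answers, and the keyword matching are all hypothetical simplifications of mine:

```python
# Minimal sketch of tool routing: the LLM never "learns" chess or maths;
# a dispatcher hands the request to a specialist tool instead.
# Tool names, canned outputs, and the keyword router are illustrative only.

def wolfram_tool(task: str) -> str:
    return "42"  # stand-in for a real maths engine

def stockfish_tool(task: str) -> str:
    return "e2e4"  # stand-in for a real chess engine

def llm_tool(task: str) -> str:
    return "plausible-sounding text"  # what the model produces on its own

TOOLS = {
    "maths": wolfram_tool,
    "chess": stockfish_tool,
}

def route(task: str) -> str:
    """Dispatch to a specialist if one matches; otherwise fall back to the LLM."""
    for keyword, tool in TOOLS.items():
        if keyword in task.lower():
            return tool(task)
    return llm_tool(task)

answer = route("Play chess: what's a good opening move?")
```

The system as a whole looks competent at chess, but that competence lives entirely in the specialist; the parrot is still just a parrot.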
The Next Evolution of Neighborhood Nexus
https://www.linkedin.com/pulse/next-evolution-neighborhood-nexus-here-neighborhoodnexus/
Tommy Pearce has been a colleague of mine on the Emory University Quantitative Theory & Methods Advisory Board for the past few years. His non-profit, Neighborhood Nexus, announced some major organizational changes this week!
Congratulations to Tommy and the entire Neighborhood Nexus team. :)
Glue Work
https://locallyoptimistic.com/post/glue-work/
I found this while doing research for a LinkedIn post I wanted to write about glue work. I’ve frequently argued in the past that analytics is 80% glue work due to the unstructured nature of the field versus something like software engineering, and Caitlin Moorman agrees with me.
Engineering is a well-defined discipline, with established norms and practices – whereas in analytics we’re in the midst of a period of transition where the dominant tools, processes, and expectations are shifting. We’re adopting some of the best practices of engineering, but it’s a work in progress. Most of us don’t have… a product manager to sort through competing stakeholder requests and determine priorities – we work closely with our stakeholders and do that ourselves. Our teams often include folks with a pretty wide range of technical skills. There is a lot of variation in the work we do, and fewer clear rules on progression.
ATLytiCS Data for Hope Competition
https://atlytics.org/2023-data-for-hope-competition-understanding-homelessness-in-atlanta/
ATLytiCS is holding a weeklong virtual hackathon focused on understanding homelessness in Atlanta and developing insights to inspire hope for tomorrow. The first-place prize is $2,000, and the final award will be presented at the Southern Data Science Conference (see below).
A note for my non-Atlantan readers: at least half of each team needs to be Atlanta-based.
Target Tuition Virtual Hackathon
https://portfolio.diceytech.co.uk/project-opportunity/1689252533079x915080713393668100
For people looking to break into the data space, Dicey Tech is hosting a hackathon ending September 3rd. First prize is £500 and an internship. I’m unclear on whether you need to be UK-based to participate, but it looks like fun!
Atlanta Data Events
Atlanta Tableau User Group Networking Night (August 17, 2023)
INFORMS Regional Analytics Conference - Atlanta (August 25, 2023)
Southern Data Science Conference (September 5, 2023)