I attended EAGxBerkeley last weekend, and wanted to capture a few thoughts and ideas before my memory taints them any further.
For context,
- this was my first EA conference, so I can’t compare it with others,
- I don’t live in the Bay Area so part of my excitement might be an artifact of the novelty and being with friends, and,
- I’m a software engineer who uses and builds with AI on a regular basis but I’m not an AI safety researcher.
At a high level, I would rate this among the most entertaining and worthwhile conferences I’ve attended; it was also hosted at Lighthaven, one of the most unique venues I’ve been to. The event organizers and volunteers deserve another round of applause for all of their great work!
My goal for the conference was mostly to network and to learn from others how best to organize and run a local EA group, in my case EA Los Angeles. On both of these fronts, the conference was a huge success for me personally. Between the talks, the one-on-ones, and the after-parties, I returned to my friend’s place each evening bubbling with energy and eager to “get to work”!
To the specifics. I’ve had a few days to organize my notes and wanted to share a few takeaways (in no particular order):
- While there has been impressive progress in the past couple of years, cultivated meat still appears to be many years away from going “mainstream”.
- I still believe supply-side interventions (real cultivated meat at a comparable price point) are the most promising path to ending or dramatically reducing factory farming.
- AI safety / security, as a field or subfield, will only continue to receive more and more attention. OpenAI’s o1 (codenamed Strawberry) was released today.
- My guess is that over half of the attendees were there solely, or at least primarily, for AI safety.
- This field also seems to attract a lot of young, talented individuals, particularly from mathematics and computer science backgrounds.
- Much of the AI safety research is still highly theoretical.
- This might be obvious (it is research, after all), but part of me is eager to see a few AI safety orgs adopt a more engineering-focused approach. I think Vitalik Buterin’s defensive acceleration has a lot of merit.
- In many ways, this felt like two simultaneous conferences: an AI safety conference and a smaller animal welfare conference sharing the same venue.
- One of my favorite talks was by Derek Shiller from Rethink Priorities on a “Bargaining Approach to Moral Uncertainty”.
- See previous bullet point about differences in priorities.
- The topic gives directionality.
- Check out The Moral Parliament Tool.
- I don’t think I have fully digested the three previous bullets…
- I sat for the guided meditation on Sunday morning and was glad to see it on the schedule.
- Reduce suffering through practice?
This list is far from comprehensive, but hopefully it highlights a few of the ideas and vibes that I picked up at the conference. There were a lot of talks I wasn’t able to attend, but I’m looking forward to catching them when they are posted.
This conference was a wonderful showing for the community, and at its heart, a community is nothing more than people hanging out and enjoying each other’s company! It’s even more encouraging to be part of a community that is genuinely focused on making the world a better place.