TL;DR
EA is a community where time tracking is already unusually common, and yet most people I talk to don't track their time, for one of two reasons:
1. It's too much work (with manual trackers like Toggl or Clockify)
2. It's not accurate enough (with automatic trackers like RescueTime or Rize)
I built https://donethat.ai, which solves both of these problems with AI, as part of AIM's Founding to Give program. Give it a try (and use the discount code "EA" after the 14-day trial to get another month free).
You should probably track your time
I'd argue that for most people, time is your most valuable resource.[1] Even though your day has 24 hours, eight of those are already used up by sleep, and another eight probably go to social life, the gym, food prep and eating, life admin, and commuting, leaving at most eight hours to have impact.
Oliver Burkeman argues in his recent book Meditations for Mortals that eight is still too high: most high-impact work gets done in about four hours a day, and the rest is just fluff and feeling busy.[2]
Now, how do you spend those four hours? When it comes to our other scarce resource, money, most people and companies keep budgets, and there is a whole discipline of accounting to make sure it's spent wisely. But somehow, for time, we just eyeball it.
When tracking time, the objective isn't to set a number and play "number go up." The objective is to understand where your time goes, so you can prioritize and plan better. AI is estimated to increase workforce productivity by 5%.[3] Imagine the productivity gains if everyone were better at planning and prioritization.
One last, often overlooked reason: tracking time can reduce anxiety and guilt. We often feel like we should "do more," but there is always more to do. By setting realistic time-based goals like "work 4h/day on project X," we have a clear measure of when we've achieved the goal, and we have full control over the outcome.
If you want to dive deeper than these hand-wavy arguments into why time tracking is useful, check out the LW post by Lynette, or the discussion.
Lessons and updates
The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.
In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.
There are four categories of lessons and updates:
On the first two points, the post from Ben Todd is good, though I don’t agree with all of it. In my view, the most important lessons on those two points, which also have bearing on the third and fourth, are:
On the third point — how to reduce the chance of future catastrophes — the key thing, in my view, is to pay attention to people’s local incentives when trying to predict their behaviour, in particular looking at the governance regime they are in. Some of my concrete lessons, here, are:
On how to respond better to crises in the future: I think there’s a lot. I currently have no formal responsibilities over any community organisations, and do only limited informal advising,[3] so I’ll primarily let Zach (once he’s back from vacation) and others comment in more depth on the lessons learned from this, as well as the changes that are being made, and planned, across the EA community as a whole.
But one of the biggest lessons, for me, is decentralisation: ensuring that people and organisations have a clearer separation of roles and activities than they have had in the past. I wrote about this more here. (Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism. This is because there are “public goods” in the movement that won’t be provided by default, and because I think Zach is going to be a strong CEO who can plausibly pull this off.)
In my own case, at the time of the FTX collapse, I was:
But once FTX collapsed, these roles interfered with each other. In particular, being on the board of EV and an advisor to the Future Fund significantly impaired my ability to defend EA in the aftermath of the collapse and to help the movement make sense of what had happened. In retrospect, I wish I’d started building up a larger board for EV (then CEA), and transitioned out of that role, as early as 2017 or 2018; this would have made the movement as a whole more robust.
Looking forward, I’m going to stay off boards for a while, and focus on research, writing and advocacy.
I give my high-level take on what generally follows from taking moral uncertainty seriously here: “In general, and very roughly speaking, I believe that maximizing expected choiceworthiness under moral uncertainty entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race.”
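For readers unfamiliar with the term, “maximizing expected choiceworthiness” (MEC) has a standard formalization. The sketch below is my gloss in conventional notation, not a formula taken from the post:

```latex
% Maximize Expected Choiceworthiness (MEC), in standard notation:
%   C(T_i)  -- credence in moral theory T_i
%   CW_i(A) -- choiceworthiness of action A according to theory T_i
\mathrm{EC}(A) = \sum_{i} C(T_i)\,\mathrm{CW}_i(A)
% MEC then recommends the action that maximizes expected choiceworthiness:
A^{*} = \arg\max_{A} \mathrm{EC}(A)
```

The quoted claim is that, given MacAskill’s credences across moral theories, the action ranking this rule produces resembles a value-pluralist consequentialism with side constraints.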
There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.
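To spell out the implicit step: a default on very large amounts of customer funds, as a result of misconduct, entails “any default,” so forecasters’ probability for it is bounded above by the 1.3% figure. The inequality below just restates that containment; it is not from the post:

```latex
% "Large misconduct default" is a strict subset of "any default", so:
P(\text{large misconduct default}) \le P(\text{any default}) = 0.013
```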
More generally, I’m trying to emphasise that I am not the “leader” of the EA movement, and, indeed, that I don’t think the EA movement is the sort of thing that should have a leader. I’m still in favour of EA having advocates (hopefully very many of them, including people who become far better known than I am), and I plan to continue to advocate for EA, but I see that as a very different role.