• Prominent machine learning figures want strict AI regulation – they want R&D budgets reallocated to AI safety & ethics, robust governance measures, AI firms held responsible for the harms their models cause, and a plan of safety measures to target specific risks. - AI Risk
• Many artists want to fight back against tech companies scraping their work to train AIs, and now they have a new weapon – Nightshade is a tool that “poisons” images, so that models trained on them produce degraded outputs. - MIT
• Advancing embodied AI to create robots that improve our daily lives – Habitat 3.0 is a virtual environment for exploring human-robot collaboration on daily tasks, and it’s been released alongside a rich 3D dataset and HomeRobot, a cheap hardware and software platform. - Meta
• Using ALL CAPS and saying “please” actually matters to ChatGPT – evidence suggests that even simple techniques like emphasis and politeness can nudge ChatGPT toward better outputs. - Ars Technica
• Check out this explanation of the convolution algorithm in neural networks, with no math and easy-to-understand visualizations – it’s a great way to peek inside AI models and see what they’re doing. - GitHub
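For a feel of what the convolution explainer above is visualizing, here is a minimal sketch of a 2D convolution in plain NumPy (the function name `conv2d`, the example image, and the kernel are illustrative choices, not from the linked explainer): a small kernel slides over the image, and each output value is the sum of the element-wise products in that window.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution, stride 1.

    The kernel is flipped first, which is what distinguishes true
    convolution from the cross-correlation most deep learning
    libraries actually compute.
    """
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]  # flip both axes
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # multiply the window by the kernel and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# Toy example: an edge-detecting (Laplacian-like) kernel on a
# tiny image with a bright square in the middle.
img = np.array([[0, 0, 0, 0],
                [0, 9, 9, 0],
                [0, 9, 9, 0],
                [0, 0, 0, 0]], dtype=float)
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)
print(conv2d(img, kernel))  # 2x2 output of strong (negative) edge responses
```

In a convolutional neural network the kernels aren’t hand-picked like this Laplacian – they’re learned, but the sliding-window arithmetic is the same.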