Configuring a development environment for Bun can take 10-30 minutes depending on your internet connection and computer speed. You will need ~10GB of free disk space for the repository and build artifacts.
If you are using Windows, please refer to this guide.
Using Nix (Alternative)
A Nix flake is provided as an alternative to manual dependency installation:
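A minimal sketch of using it, assuming the flake exposes a default dev shell (check the repository's flake.nix for the actual outputs) and that your Nix install has flakes enabled:

```sh
# From the repository root, enter the development shell declared in
# flake.nix. Requires flakes enabled in your Nix config
# (experimental-features = nix-command flakes):
nix develop

# Run a one-off command inside the shell without entering it
# (assumes cmake is among the tools the dev shell provides):
nix develop --command cmake --version
```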
How we pwned X (Twitter), Vercel, Cursor, Discord, and hundreds of companies through a supply-chain attack
hi, i'm daniel. i'm a 16-year-old high school senior. in my free time, i hack billion dollar companies and build cool stuff.
about a month ago, a couple of friends and i found critical vulnerabilities in Mintlify, an AI documentation platform used by some of the top companies in the world.
i found a critical cross-site scripting vulnerability that, if abused, would let an attacker inject malicious scripts into the documentation of numerous companies and steal credentials from any user who opened a single link.
tl;dr: if you just want the method, skip to the How-to section
Clangd is a state-of-the-art C/C++ language server that can be used in every popular text editor, like Neovim, Emacs, or VS Code. Even CLion uses clangd under the hood. Unfortunately, clangd requires compile_commands.json to work, and the easiest way to painlessly generate it is to use CMake.
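Concretely, for a CMake project a single stock option does the job (CMAKE_EXPORT_COMPILE_COMMANDS is a standard CMake variable, and clangd also looks inside build/ subdirectories, so the symlink below is optional):

```sh
# Ask CMake to emit compile_commands.json into the build directory
cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

# Optional: expose it at the project root, where every tool finds it
ln -sf build/compile_commands.json .
```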
For simple projects you can try to use Bear - it will capture the compile commands and generate compile_commands.json, though I could never make it work in big projects with custom or complicated build systems.
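For reference, the modern invocation looks like this (Bear 3.x syntax; 2.x took the build command directly, as in bear make):

```sh
# Run the build under Bear; it intercepts every compiler call and
# writes compile_commands.json into the current directory.
bear -- make -j"$(nproc)"
```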
But what if I told you that you can quickly hack your way around that, and generate compile_commands.json for any project, no matter how complicated? I have used this method at work for years, originally because I used CLion, which supported only CMake projects - but now I use it successfully with clangd and Neovim.
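One common shape of that hack - not necessarily the exact method from the original post - is a tiny wrapper that logs every compiler invocation, which you later wrap into a JSON array. A rough sketch, where cc-log and the log path are hypothetical names, and the "last argument is the source file" assumption only holds for simple builds:

```bash
#!/bin/bash
# cc-log: hypothetical logging wrapper; use as  make CC=/path/to/cc-log
# Appends one JSON-ish entry per compile, then runs the real compiler.
echo "{\"directory\": \"$PWD\", \"command\": \"cc $*\", \"file\": \"${!#}\"}," \
  >> /tmp/cc-log.jsonl
exec cc "$@"
```

Afterwards, wrap the collected lines in [ ... ], drop the trailing comma, and save the result as compile_commands.json next to your sources.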
Writing C software without the standard library [Linux Edition] - Franc[e]sco's Gopherspace
This aims to be factual information about the sizes of large language models. None of this document was written by AI. I do not include any information from leaks or rumors. The focus of this document is on base models (the raw text-continuation engines, not 'helpful chatbot/assistants'). This is a view, from a few years ago to today, of one very tiny fraction of the larger LLM story that is unfolding.
History
GPT-2, -medium, -large, -xl (2019): 137M, 380M, 812M, 1.61B. Source: openai-community/gpt2. Trained on the unreleased WebText dataset, said to be 40GB of Internet text - at the usual rough 4 bytes per token, I estimate that to be roughly 10B tokens. You can see a list of the websites that went into that dataset here: domains.txt.
GPT-3 aka davinci, davinci-002 (2020): 175B parameters. There is a good breakdown of how those parameters are 'spent' here [How d
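As a rough sanity check on the headline number: with GPT-3's published configuration (96 layers, d_model = 12288), the standard decoder-only approximation - 4d² for the attention projections plus 8d² for the 4d-wide MLP per layer, embeddings ignored - lands almost exactly on it:

\[
N \approx 12 \, n_{\text{layers}} \, d_{\text{model}}^2 = 12 \times 96 \times 12288^2 \approx 1.74 \times 10^{11} \approx 175\text{B}
\]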