Contents in this wiki are for entertainment purposes only
This is not fiction ∞ this is psience of mind

User contributions for XenoEngineer


13 August 2024

12 August 2024

  • 12:51, 12 August 2024 diff hist +16,272 N Talk:Lost In Space Created page with "<pre style="margin-left:3em; font:normal 14px terminal;"> curl "https://api.groq.com/openai/v1/chat/completions" \ -X POST \ -H "Content-Type: application/json" \ -H "Authorization: Bearer ${GROQ_API_KEY}" \ -d '{ "messages": [ { "role": "user", "content": "Question: Do you think you could to arithmetic addition like humans do on paper and pencil, by adding the numbers right to left, and carry a tens-digit to the lef..." current
  • 12:29, 12 August 2024 diff hist +13,861 N Lost In Space Created page with "Category:LLM Category:Llama3 8B Category:Llama <pre style="margin-left:3em; font:normal 14px terminal;">Playground Studio Chat llama3-8b-8192 View code SYSTEM Enter system message (Optional) USER Question: Do you think you could to arithmetic addition like humans do on paper and pencil, by adding the numbers right to left, and carry a tens-digit to the left? ASSISTANT What a fascinating question! While I'm not a human, I can indeed simulate arithmetic ad..." current
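The two pages logged above quote a prompt asking the model to add numbers the way humans do on paper: column by column, right to left, carrying the tens digit. For reference, that procedure can be sketched in Python (the function name is illustrative, not part of the logged pages):

```python
def add_like_paper(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings, working
    right to left and carrying the tens digit to the next column,
    as described in the prompt quoted in the log entries above."""
    i, j = len(a) - 1, len(b) - 1
    carry = 0
    digits = []
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0   # current column of a, or 0 past its end
        db = int(b[j]) if j >= 0 else 0   # current column of b, or 0 past its end
        total = da + db + carry
        digits.append(str(total % 10))    # ones digit stays in this column
        carry = total // 10               # tens digit carries one column left
        i -= 1
        j -= 1
    return "".join(reversed(digits))      # columns were built right to left
```

Tokenizer-based LLMs often stumble on exactly this carry-propagation step, which is presumably why the prompt frames it explicitly.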

11 August 2024

10 August 2024

9 August 2024

  • 11:51, 9 August 2024 diff hist +34 N Template:MenuLMPrimer Created page with "Category:Language Model Primer" current
  • 11:50, 9 August 2024 diff hist +1,913 N Principles of Sinusoidal Positioning Created page with "{{menuLMPrimer}} <div style="background-color:azure; border:1px outset azure; padding:0 20px; max-width:860px; margin:0 auto; "> ===Principles of Sinusoidal Positioning=== ====Periodicity, Frequency, Orthogonality==== In natural language processing (NLP) and deep learning, positional encoding is a technique used to incorporate information about the position of each token in a sequence into a vector representation. This allows the model to capture sequential relationship..." current
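The preview above describes sinusoidal positional encoding. The standard construction pairs a sine and a cosine per dimension pair, with wavelengths forming a geometric progression; a minimal sketch of that encoding (the function name is illustrative):

```python
import math

def sinusoidal_positions(seq_len: int, d_model: int) -> list[list[float]]:
    """Standard sinusoidal positional encoding: even dimensions use
    sine, odd dimensions use cosine, and each dimension pair i has
    angular frequency 1 / 10000**(i / d_model), so wavelengths grow
    geometrically across the embedding dimensions."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)          # even dimension: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimension: cosine
    return pe
```

Because each position gets a distinct pattern of phases across frequencies, relative offsets between tokens are recoverable by linear operations on these vectors, which is what lets attention capture sequential relationships.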
  • 11:15, 9 August 2024 diff hist +11,409 N Llama attention extension Created page with "Category:Llama3.1 Category:AI Category:Python __NOTOC__ <div style="background-color:azure; border:1px outset azure; padding:0 20px; max-width:860px; margin:0 auto; "> = Programmatic Extension of Infini-Attention Feature = == Modifying Positional Encoding == To extend the context-length of LLama 3.1's inherent infini-attention mechanism, start by modifying its positional encoding vector.   A good place to begin is by increasing the frequency of the s..."
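The preview above proposes extending context length by modifying the positional encoding. The excerpt is truncated, so the exact method on that page is unknown; one widely used concrete technique is position interpolation, which rescales positions by a constant factor so that a longer sequence maps back into the position range the model was trained on. A minimal sketch under that assumption (function name illustrative):

```python
def scaled_angle(pos: int, i: int, d_model: int, scale: float) -> float:
    """Positional-encoding angle for dimension pair i at position pos,
    with positions compressed by `scale` (position-interpolation style),
    so a context `scale` times longer reuses the trained angle range."""
    return (pos / scale) / (10000 ** (i / d_model))

# With scale=4, position 4096 yields the same angles the model
# originally produced at position 1024 with no scaling.
```

Whether one stretches wavelengths (interpolation) or raises frequencies, the point is the same: the encoding must keep adjacent long-context positions distinguishable without pushing angles outside the distribution seen in training.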

7 August 2024

5 August 2024
