- Using Large Language Models for Zero-Shot Video Generation: A VideoPoet Case Study (January 11, 2025, by Teleplay Technology). Categories: ai-video-training, decoder-only-architecture, generative-video-models, llms, multimodal-inputs, video-generation-ai, video-generation-using-ai, videopoet

The following posts are all by Anchoring and share the categories: anchor-based-llms, anchor-self-attention-network, anllms, decoder-only-architecture, gpu-memory-optimization, in-context-learning, natural-language-modeling, transformer-architecture.

- Training and Testing Data Formats for AnLLM Models (October 11, 2024)
- Anchor-based Large Language Models: More Experimental Results (October 11, 2024)
- Practical LLMs for Real-World Applications (October 11, 2024)
- Anchor-based Large Language Models: Analysis (October 11, 2024)
- Benchmarking AnLLMs: Insights from OpenBookQA to BoolQ (October 10, 2024)
- Pre-Training AnLLMs: Leveraging RedPajama Data for Enhanced Performance (October 10, 2024)
- Anchor-based Large Language Models: Experiments and Implementation (October 10, 2024)
- Improving Real-Time Inference with Anchor Tokens (October 10, 2024)
- The Role of Anchor Tokens in Self-Attention Networks (October 10, 2024)
- Unlocking the Mechanics of Decoder-Only Transformers and Self-Attention (October 10, 2024)
- How Anchor Tokens Transform Sequence Information Compression in LLMs (October 10, 2024)