Large Language Models on Memory-Constrained Devices Using Flash Memory: Improving Throughput


:::info Authors:

(1) Keivan Alizadeh;

(2) Iman Mirzadeh, Major Contribution;

(3) Dmitry Belenko, Major Contribution;

(4) S. Karen Khatamifard;

(5) Minsik Cho;

(6) Carlo C Del Mundo;

(7) Mohammad Rastegari;

(8) Mehrdad Farajtabar.

:::

Table of Links

Abstract and 1. Introduction

2. Flash Memory & LLM Inference and 2.1 Bandwidth and Energy Constraints

2.2 Read Throughput

3 Load From Flash

3.1 Reducing Data Transfer

3.2 Improving Transfer Throughput with Increased Chunk Sizes

3.3 Optimized Data Management in DRAM

4 Results

4.1 Results for OPT 6.7B Model

4.2 Results for Falcon 7B Model

5 Related Works

6 Conclusion and Discussion, Acknowledgements and References

3.2 Improving Transfer Throughput with Increased Chunk Sizes

To increase data throughput from flash memory, it is crucial to read data in larger chunks, preferably sized in multiples of the block size of the underlying storage pool. In this section, we detail the strategy we employed to increase chunk sizes for more efficient flash memory reads.
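To make the block-size idea concrete, here is a minimal sketch of widening an arbitrary read to block-aligned boundaries. The file name, offsets, and the 4 KiB block size are illustrative assumptions, not the paper's implementation:

```python
import os

BLOCK_SIZE = 4096  # assumed storage block size; the real value is device-specific

def read_block_aligned(fd: int, offset: int, length: int) -> bytes:
    """Read `length` bytes at `offset`, widening the request to
    block-aligned boundaries so the storage stack sees one large,
    aligned read instead of several small scattered ones."""
    start = (offset // BLOCK_SIZE) * BLOCK_SIZE  # round down to a block boundary
    end = ((offset + length + BLOCK_SIZE - 1) // BLOCK_SIZE) * BLOCK_SIZE  # round up
    buf = os.pread(fd, end - start, start)       # single large read
    return buf[offset - start : offset - start + length]

# Usage: fetch a weight slice stored at an arbitrary byte offset.
fd = os.open("weights.bin", os.O_RDONLY)  # hypothetical weight file
row = read_block_aligned(fd, offset=123_456, length=16_384)
os.close(fd)
```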

Figure 5: By bundling the columns of the up projection and the rows of the down projection in OPT 6.7B, we load chunks twice as large instead of reading columns or rows separately.
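The row-and-column bundling that Figure 5 depicts can be sketched as follows. This is a minimal sketch with toy dimensions (shrunk from OPT 6.7B's d_model=4096, d_ff=16384) and a hypothetical on-flash file `ffn_bundled.bin`; the paper's actual storage format may differ:

```python
import numpy as np

# Toy FFN layer: column i of the up projection feeds neuron i,
# row i of the down projection consumes neuron i's output.
d_model, d_ff = 64, 256
W_up = np.random.randn(d_model, d_ff).astype(np.float16)
W_down = np.random.randn(d_ff, d_model).astype(np.float16)

# Store neuron i's up-projection column next to its down-projection row, so a
# predicted-active neuron costs one contiguous 2x-sized read, not two small ones.
bundled = np.ascontiguousarray(np.concatenate([W_up.T, W_down], axis=1))
bundled.tofile("ffn_bundled.bin")  # hypothetical flash-resident layout

# Fetching neuron i later: a single contiguous slice recovers both halves.
i = 42
flat = np.memmap("ffn_bundled.bin", dtype=np.float16, mode="r")
row = np.asarray(flat[i * 2 * d_model : (i + 1) * 2 * d_model])
up_col_i, down_row_i = row[:d_model], row[d_model:]
assert np.array_equal(up_col_i, W_up[:, i])
assert np.array_equal(down_row_i, W_down[i])
```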

Bundling Based on Co-activation. We conjectured that neurons may be highly correlated in their activity patterns, which could enable further bundling. To verify this, we calculated the activations of neurons over the C4 validation dataset. For each neuron, its coactivation with the other neurons follows a power law distribution, as depicted in Figure 6a. Let us call the neuron that coactivates with a given neuron most often its closest friend. Indeed, the closest friend of each neuron coactivates with it very often: as Figure 6b demonstrates, each neuron and its closest friend coactivate at least 95% of the time. The graphs for the 4th closest friend and 8th closest friend are also drawn. Based on this observation, we stored each neuron bundled with its closest friend in flash memory; whenever a neuron was predicted to be active, we also brought in its closest friend. Unfortunately, this caused highly active neurons to be loaded multiple times, and the bundling worked against our original intention: the neurons that are very active are the 'closest friend' of almost every other neuron. We intentionally present this negative result, as we believe it may lead to interesting future research on how to effectively bundle neurons and how to leverage such bundles for efficient inference.

Figure 6: (a) For a randomly selected neuron from the 10th layer of OPT 6.7B, there exists a group of neurons which are coactivated with high probability. (b) The closest friend of a neuron is defined as the most coactivated neuron in the same layer; the closest friend of every neuron in OPT 6.7B almost always gets coactivated. (c) The 3rd closest friend gets coactivated with each neuron 86% of the time on average. (d) The 7th closest friend seems to be less relevant and does not coactivate with the neuron very often.

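The coactivation statistics and "closest friend" notion above can be computed as in the following minimal sketch. The activation matrix here is synthetic stand-in data (with one artificially hyperactive neuron) rather than real OPT 6.7B traces, which would come from instrumenting the model's FFN layers:

```python
import numpy as np

# Hypothetical stand-in: acts[t, n] = 1 if neuron n fired on token t.
rng = np.random.default_rng(0)
acts = (rng.random((10_000, 1024)) < 0.05).astype(np.float32)
acts[:, 0] = (rng.random(10_000) < 0.6)  # one hyperactive neuron, as in the text

# cooc[i, j] = number of tokens on which neurons i and j fire together.
cooc = acts.T @ acts
np.fill_diagonal(cooc, 0)  # ignore self-coactivation

# P(j active | i active), the statistic behind Figures 6a-6d.
p_coact = cooc / np.maximum(acts.sum(axis=0)[:, None], 1)

# "Closest friend" of neuron i = the neuron most often coactivated with it.
closest_friend = p_coact.argmax(axis=1)

# The failure mode described above: hyperactive neurons become the closest
# friend of almost everyone, so bundles would reload them over and over.
popularity = np.bincount(closest_friend, minlength=acts.shape[1])
print("most popular neuron is the closest friend of", popularity.max(), "others")
```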

:::info This paper is available on arXiv under a CC BY-SA 4.0 DEED license.

:::
