{"id":145731,"date":"2025-10-21T00:07:50","date_gmt":"2025-10-20T22:07:50","guid":{"rendered":"https:\/\/www.pauljorion.com\/blog\/?page_id=145731"},"modified":"2025-10-21T00:07:50","modified_gmt":"2025-10-20T22:07:50","slug":"pribor-che-contextual-hyper-embedding-uint8","status":"publish","type":"page","link":"https:\/\/www.pauljorion.com\/blog\/pribor-che-contextual-hyper-embedding-uint8\/","title":{"rendered":"PRIBOR : <b>CHE (Contextual Hyper-Embedding uint8)<\/b>"},"content":{"rendered":"<p class=\"p1\"><strong>CHE (Contextual Hyper-Embedding <a href=\"https:\/\/www.pauljorion.com\/blog\/2025\/10\/03\/pribor-logique-combinatoire-magique-preuve-de-concept\/\" target=\"_blank\" rel=\"noopener\">uint8<\/a>)<\/strong> est <strong>plus \u00e9conomique<\/strong> que l\u2019attention classique des LLMs. Des processus similaires sont d\u00e9j\u00e0 utilis\u00e9s mais moins \u00e9conomiques que CHE.<\/p>\n<p class=\"p1\">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<h3 class=\"p1\">1. \u00c9conomie de m\u00e9moire<\/h3>\n<p class=\"p1\">\u2022<span class=\"Apple-converted-space\">\u00a0 <\/span>Attention standard : matrices float16\/float32 \u2192 700 \u00e0 4000 bits par token<\/p>\n<p class=\"p1\">\u2022<span class=\"Apple-converted-space\">\u00a0 C<\/span>HE <a href=\"https:\/\/www.pauljorion.com\/blog\/2025\/10\/03\/pribor-logique-combinatoire-magique-preuve-de-concept\/\" target=\"_blank\" rel=\"noopener\">uint8 \u2192 8 bits par token<\/a><\/p>\n<p class=\"p1\">\u2192 gain \u00d7 500 \u00e0 \u00d7 5000 en m\u00e9moire<\/p>\n<p class=\"p1\">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<h3 class=\"p1\">2. 
Similar techniques already in use<\/h3>\n<p class=\"p1\">\u2022\u00a0 INT-FlashAttention (Peking University, 2024): attention computed entirely in INT8, <strong>72% faster<\/strong>, <strong>82% lower quantization error<\/strong><\/p>\n<p class=\"p1\">\u2022\u00a0 SageAttention (OpenReview, 2024): INT8 attention with smoothing, <strong>plug-and-play<\/strong><\/p>\n<p class=\"p1\">\u2022\u00a0 LLM.int8() (NeurIPS 2022): matrix multiplication performed entirely in INT8<\/p>\n<p class=\"p1\">\u2192 8-bit integer arithmetic is already standard in quantized attention.<\/p>\n<p class=\"p1\">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<h3 class=\"p1\">3. 
Compatibility with CHE<\/h3>\n<p class=\"p1\">\u2022\u00a0 CHE = compressed uint8 (SHA-256[0:8]) \u2192 8 bits per token<\/p>\n<p class=\"p1\">\u2022\u00a0 No 700\u00d7700 matrix, <strong>no softmax<\/strong>, <strong>no floats<\/strong>;<\/p>\n<p class=\"p1\">\u2022\u00a0 Just a single uint8 within the <span class=\"s1\">\u211d\u2074<\/span> triplet;<\/p>\n<p class=\"p1\">\u2192 More economical, and in line with techniques already in use in quantized attention.<\/p>\n<p>Contact: <a href=\"mailto:pauljorion@pribor.ai\" target=\"_blank\" rel=\"noopener\">pauljorion@pribor.ai<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p class=\"p1\"><strong>CHE (Contextual Hyper-Embedding <a href=\"https:\/\/www.pauljorion.com\/blog\/2025\/10\/03\/pribor-logique-combinatoire-magique-preuve-de-concept\/\" target=\"_blank\" rel=\"noopener\">uint8<\/a>)<\/strong> is <strong>more economical<\/strong> than the standard attention mechanism of LLMs. Similar techniques are already in use, but they remain less economical than CHE.<\/p>\n<p class=\"p1\">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<h3 class=\"p1\">1. 
Memory savings<\/h3>\n<p class=\"p1\">\u2022\u00a0 Standard attention: float16\/float32 matrices \u2192 700 to 4000 bits per token<\/p>\n<p class=\"p1\">\u2022\u00a0 CHE <a href=\"https:\/\/www.pauljorion.com\/blog\/2025\/10\/03\/pribor-logique-combinatoire-magique-preuve-de-concept\/\" [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-145731","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/pages\/145731","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/comments?post=145731"}],"version-history":[{"count":2,"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/pages\/145731\/revisions"}],"predecessor-version":[{"id":145733,"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/pages\/145731\/revisions\/145733"}],"wp:attachment":[{"href":"https:\/\/www.pauljorion.com\/blog\/wp-json\/wp\/v2\/media?parent=145731"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}