{"id":2287,"date":"2025-10-18T20:11:44","date_gmt":"2025-10-18T18:11:44","guid":{"rendered":"https:\/\/www.pauljorion.com\/blog_en\/?p=2287"},"modified":"2025-10-19T14:21:09","modified_gmt":"2025-10-19T12:21:09","slug":"pribor-che-contextual-hyper-embedding-uint8","status":"publish","type":"post","link":"https:\/\/www.pauljorion.com\/blog_en\/2025\/10\/18\/pribor-che-contextual-hyper-embedding-uint8\/","title":{"rendered":"<b>PRIBOR: CHE \u2014 Contextual Hyper-Embedding (uint8)<\/b>"},"content":{"rendered":"<h3 data-start=\"300\" data-end=\"362\"><strong data-start=\"304\" data-end=\"360\">A more economical alternative to classical attention<\/strong><\/h3>\n<p data-start=\"363\" data-end=\"601\">CHE (Contextual Hyper-Embedding, uint8) offers a radical gain in efficiency compared with the standard attention mechanisms of large language models. Similar methods have appeared in recent research, but none reach CHE\u2019s level of economy.<\/p>\n<hr data-start=\"603\" data-end=\"606\" \/>\n<h3 data-start=\"608\" data-end=\"639\"><strong data-start=\"612\" data-end=\"637\">1 \u00b7 Memory efficiency<\/strong><\/h3>\n<ul data-start=\"640\" data-end=\"812\">\n<li data-start=\"640\" data-end=\"723\">\n<p data-start=\"642\" data-end=\"723\"><strong data-start=\"642\" data-end=\"665\">Standard attention:<\/strong> float16 \/ float32 matrices \u2192 700 to 4000 bits per token<\/p>\n<\/li>\n<li data-start=\"724\" data-end=\"812\">\n<p data-start=\"726\" data-end=\"812\"><strong data-start=\"726\" data-end=\"742\">CHE (uint8):<\/strong> 8 bits per token<br data-start=\"759\" data-end=\"762\" \/>\u27a1 A \u00d7500 to \u00d75000 reduction in memory footprint.<\/p>\n<\/li>\n<\/ul>\n<hr data-start=\"814\" data-end=\"817\" \/>\n<h3 data-start=\"819\" data-end=\"854\"><strong data-start=\"823\" data-end=\"852\">2 \u00b7 Comparable approaches<\/strong><\/h3>\n<p data-start=\"855\" data-end=\"962\">Several projects already explore integer-based attention, confirming that the paradigm shift is underway:<\/p>\n<ul data-start=\"964\" data-end=\"1229\">\n<li data-start=\"964\" data-end=\"1069\">\n<p data-start=\"966\" data-end=\"1069\"><strong data-start=\"966\" data-end=\"1015\">INT-FlashAttention (Peking University, 2024):<\/strong> full INT8 attention \u2014 72 % faster, 82 % less error.<\/p>\n<\/li>\n<li data-start=\"1070\" data-end=\"1154\">\n<p data-start=\"1072\" data-end=\"1154\"><strong data-start=\"1072\" data-end=\"1109\">SageAttention (OpenReview, 2024):<\/strong> INT8 attention + smoothing, plug-and-play.<\/p>\n<\/li>\n<li data-start=\"1155\" data-end=\"1229\">\n<p data-start=\"1157\" data-end=\"1229\"><strong data-start=\"1157\" data-end=\"1187\">LLM.int8() (NeurIPS 2022):<\/strong> matrix multiplication entirely in INT8.<\/p>\n<\/li>\n<\/ul>\n<p data-start=\"1231\" data-end=\"1322\">In other words, <strong data-start=\"1247\" data-end=\"1295\">uint8 quantization is already the new normal<\/strong> for efficient attention.<\/p>\n<blockquote data-start=\"1324\" data-end=\"1499\">\n<p data-start=\"1326\" data-end=\"1499\">\ud83d\udd17 <a class=\"decorated-link\" href=\"https:\/\/www.pauljorion.com\/blog\/2025\/10\/03\/pribor-logique-combinatoire-magique-preuve-de-concept\/\" target=\"_new\" rel=\"noopener\" data-start=\"1329\" data-end=\"1499\">Proof of concept \u2013 Combinatorial Magic Logic (Paul Jorion Blog, 2025)<\/a><\/p>\n<\/blockquote>\n<hr data-start=\"1501\" data-end=\"1504\" \/>\n<h3 data-start=\"1506\" data-end=\"1547\"><strong data-start=\"1510\" data-end=\"1545\">3 \u00b7 CHE\u2019s 
---

### 2 · Comparable approaches

Several projects already explore integer-based attention, confirming that the paradigm shift is under way:

- **INT-FlashAttention (Peking University, 2024):** fully INT8 attention, 72 % faster with 82 % lower quantization error.
- **SageAttention (OpenReview, 2024):** INT8 attention with smoothing, plug-and-play.
- **LLM.int8() (NeurIPS 2022):** transformer matrix multiplication in INT8, with a small float16 path for outlier features.

In other words, **8-bit quantization is already becoming the norm** for efficient attention.

> 🔗 [Proof of concept – Combinatorial Magic Logic (Paul Jorion Blog, 2025)](https://www.pauljorion.com/blog/2025/10/03/pribor-logique-combinatoire-magique-preuve-de-concept/)

---

### 3 · CHE's distinctive principle

CHE compresses each token into a **single uint8 value**: the truncated hash SHA-256 [0 : 8], used within an ℝ⁴ triplet.

- No 700 × 700 attention matrix.
- No softmax.
- No floating-point computation.

Just a compact integer-based representation: **lighter, faster, and natively compatible** with existing quantized-attention architectures. A minimal sketch of the truncated-hash step follows.
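The snippet below is a sketch only, under the assumption that "SHA-256 [0 : 8]" denotes the first 8 bits (the first byte) of the digest; how that byte enters the ℝ⁴ triplet, and where the context-dependence of the "Contextual" embedding comes from, is not detailed in this post and is not shown here.

```python
import hashlib

def che_code(token: str) -> int:
    """Return one uint8 code for a token via a truncated SHA-256 hash."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return digest[0]  # first byte of the digest: an integer in 0..255 (uint8 range)

# One byte per token: no attention matrix, no softmax, no floating point.
tokens = "the cat sat on the mat".split()
codes = [che_code(t) for t in tokens]
print(codes)  # six uint8 values for six tokens
```

The hash by itself is deterministic per token; any context-dependence would have to come from the surrounding ℝ⁴ triplet construction, which this sketch does not attempt to reproduce.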
---

**Contact:** [pauljorion@pribor.ai](mailto:pauljorion@pribor.ai)