"@xenova/transformers": "^2.17.2",
I'm using this on Windows with Microsoft Edge and Vite.
I converted jinaai/reader-lm-1.5b using the conversion script mentioned in the docs, then renamed the quantized model to decoder_model_merged_quantized.onnx. The converted model is here: Bewinxed/reader-lm-1.5b-onnx. I tried loading it remotely, but for speed I put the model in /public/models.
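In case it helps, here is how I understand the local path resolution in Transformers.js v2 when `env.localModelPath` is set. This is a sketch of my assumption about the layout (model ID as a subdirectory, ONNX weights under an `onnx/` folder), not code from the library itself:

```javascript
// Sketch (assumption, not library source): with env.localModelPath = "/models/"
// and allowLocalModels = true, Transformers.js v2 fetches model files from
// `${localModelPath}/${modelId}/onnx/${fileName}` relative to the site root.
function localModelUrl(localModelPath, modelId, fileName) {
  // Join segments with exactly one slash between each.
  const join = (a, b) => a.replace(/\/+$/, '') + '/' + b.replace(/^\/+/, '');
  return join(join(localModelPath, modelId), fileName);
}

// With the setup in this report, the quantized weights would be expected at:
const url = localModelUrl(
  '/models/',
  'Bewinxed/reader-lm-1.5b-onnx',
  'onnx/decoder_model_merged_quantized.onnx'
);
// → '/models/Bewinxed/reader-lm-1.5b-onnx/onnx/decoder_model_merged_quantized.onnx'
```

So under Vite, the file would need to live at `public/models/Bewinxed/reader-lm-1.5b-onnx/onnx/decoder_model_merged_quantized.onnx` (again, assuming the layout above is right).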
The loading progress updates and RAM usage increases, so the model is being loaded. But after a short wait, I get this in the console:
```
ort-wasm.js:25 Unknown Exception
lt @ ort-wasm.js:25
P @ ort-wasm.js:44
$func11504 @ ort-wasm-simd.wasm:0x82c2bc
$func2149 @ ort-wasm-simd.wasm:0x16396e
$func584 @ ort-wasm-simd.wasm:0x48a63
$func11427 @ ort-wasm-simd.wasm:0x829582
$func4164 @ ort-wasm-simd.wasm:0x339b6f
$func4160 @ ort-wasm-simd.wasm:0x339aff
j @ ort-wasm.js:56
$func356 @ ort-wasm-simd.wasm:0x2e215
j @ ort-wasm.js:56
$func339 @ ort-wasm-simd.wasm:0x28e06
$Ra @ ort-wasm-simd.wasm:0x6ebffb
e2._OrtCreateSession @ ort-wasm.js:48
e.createSessionFinalize @ wasm-core-impl.ts:53
e.createSession @ wasm-core-impl.ts:99
e.createSession @ proxy-wrapper.ts:187
loadModel @ session-handler.ts:65
await in loadModel
createSessionHandler @ backend-wasm.ts:48
create @ inference-session-impl.ts:189
await in create
constructSession @ models.js:126

wasm-core-impl.ts:55 Uncaught (in promise) Error: Can't create a session
    at e.createSessionFinalize (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13232:119)
    at e.createSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13250:46)
    at e.createSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13034:17)
    at e.OnnxruntimeWebAssemblySessionHandler.loadModel (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13110:98)
    at async Object.createSessionHandler (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:5140:20)
    at async _InferenceSession.create (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:725:25)
    at async constructSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:21756:12)
    at async Promise.all (index 1)
    at async Qwen2ForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:22130:14)
    at async AutoModelForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:24837:14)
e.createSessionFinalize @ wasm-core-impl.ts:55
e.createSession @ wasm-core-impl.ts:99
e.createSession @ proxy-wrapper.ts:187
loadModel @ session-handler.ts:65
```
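One thing I considered while debugging (my own guess, not from the library docs): "Can't create a session" can happen when onnxruntime-web is handed bytes that aren't a valid ONNX model, e.g. if the dev server answers a missing `/public` path with the `index.html` fallback. A small check for that case:

```javascript
// Sketch (my own diagnostic, not from the report or library): an ONNX model is
// a protobuf and never starts with '<', while an HTML fallback page
// ("<!DOCTYPE html>...") always does.
function looksLikeHtmlFallback(bytes) {
  return bytes.length > 0 && String.fromCharCode(bytes[0]).trim() === '<';
}

// Usage in the browser console (path assumed from this report):
// const buf = new Uint8Array(await (await fetch(
//   '/models/Bewinxed/reader-lm-1.5b-onnx/onnx/decoder_model_merged_quantized.onnx'
// )).arrayBuffer());
// console.log(looksLikeHtmlFallback(buf)); // true => the server returned HTML
```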
Here is my worker script:
```javascript
import { pipeline, env } from '@xenova/transformers';

env.localModelPath = "/models/";
env.allowLocalModels = true;
env.useBrowserCache = false;
env.allowRemoteModels = false;
env.backends.onnx.wasm.proxy = true;

class ReaderLM {
  static task = 'text-generation';
  static model = 'Bewinxed/reader-lm-1.5b-onnx';
  /** @type {import('@xenova/transformers').Pipeline} */
  static instance = null;

  static async getInstance(progress_callback = null) {
    if (ReaderLM.instance === null) {
      ReaderLM.instance = pipeline(ReaderLM.task, ReaderLM.model, {
        progress_callback,
      });
    }
    return ReaderLM.instance;
  }
}

self.addEventListener('message', async (event) => {
  let readerlm = await ReaderLM.getInstance((x) => {
    self.postMessage(x);
  });

  let output = await readerlm(
    [{ role: 'user', content: event.data.text }],
    {
      callback_function: (x) => {
        self.postMessage({
          status: 'update',
          output: readerlm.tokenizer.decode(x[0].output_token_ids, {
            skip_special_tokens: true,
          }),
        });
      },
    }
  );

  self.postMessage({ status: 'complete', output: output });
});

console.debug("ReaderLM worker loaded");
```
```javascript
worker.postMessage({ text: markup })
```
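For completeness, this is roughly how my main thread consumes the worker's messages. `handleWorkerMessage` and the `ui` object are hypothetical names for illustration; the message shapes (`{status: 'update'|'complete', output}`) come from the worker script above, and anything else is a progress event from the pipeline loader:

```javascript
// Sketch (assumed names): route messages from the ReaderLM worker.
function handleWorkerMessage(msg, ui) {
  if (msg.status === 'update') {
    ui.partial = msg.output;   // streamed partial decode
  } else if (msg.status === 'complete') {
    ui.final = msg.output;     // finished generation
  } else {
    ui.progress = msg;         // model-loading progress callback
  }
}

// Browser wiring (not runnable outside the browser):
// const worker = new Worker(new URL('./worker.js', import.meta.url), { type: 'module' });
// worker.addEventListener('message', (e) => handleWorkerMessage(e.data, ui));
// worker.postMessage({ text: markup });
```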
THANK YOU!