Description
I'm running a local instance of LLamaSharp 0.20.0 with the same version of the CPU backend; both are the latest on NuGet.
On my dev machine it works fine, but on a production server (same Windows Server 2022 OS, same i7-12700 CPU, same 32 GB of memory) it breaks while initialising the llama backend:
Note that it has got past the llama_empty_call() line successfully.
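For reference, the early-load probe looks roughly like this (sketched from memory, so the exact wrapper around NativeApi.llama_empty_call() may differ slightly from what I actually ran):

using LLama.Native;

// Force the native llama library to load before anything else touches it.
// On the failing machine this call succeeds; the crash only happens later,
// in llama_backend_init() via the ModelParams constructor.
try
{
    NativeApi.llama_empty_call();
    Console.WriteLine("llama_empty_call OK");
}
catch (Exception ex)
{
    Console.WriteLine($"native library failed to load: {ex}");
}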
The exception text is:
System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
The calling C# code breaks on the first ModelParams reference:
string modelPath = @"c:\llamamodels\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa"; // this came with ollama
var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,
    GpuLayerCount = 0
};
It's quite hard to understand why it works on my dev machine and not on a similar production box. I made sure both are running the same version of .NET 8, but this looks like it's happening in the C++ side anyway.
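To compare the two machines, one thing worth trying is logging which native library variant LLamaSharp picks (AVX level, etc.) before any other call into it. Something along these lines, assuming the NativeLibraryConfig log-callback API is unchanged in 0.20.0 (the exact names may differ between versions, so check against the installed package):

using LLama.Native;

// Must run before the first NativeApi call (i.e. before ModelParams is
// constructed), otherwise the native library has already been selected.
NativeLibraryConfig.All.WithLogCallback((level, message) =>
    Console.WriteLine($"[llama {level}] {message}"));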
Here's the call stack printed to the console after the error:
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
at LLama.Native.NativeApi.llama_backend_init()
--------------------------------
at LLama.Native.NativeApi..cctor()
at LLama.Native.NativeApi.llama_max_devices()
at LLama.Abstractions.TensorSplitsCollection..ctor()
at LLama.Common.ModelParams..ctor(System.String)
at LlamaTest.Program+<AI1>d__5.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at LlamaTest.Program.AI1()
at LlamaTest.Program+<Main>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at LlamaTest.Program.Main(System.String[])
at LlamaTest.Program.<Main>(System.String[])
and in the event log there are two separate entries:
Faulting application name: LlamaCS2.exe, version: 1.0.0.0, time stamp: 0x66960000
Faulting module name: coreclr.dll, version: 8.0.824.36612, time stamp: 0x6696b815
Exception code: 0xc0000005
Fault offset: 0x00000000001c2090
Faulting process id: 0x1964
Faulting application start time: 0x01db71af873801b1
Faulting application path: C:\xAssetsAI\TestDebug\LlamaCS2.exe
Faulting module path: C:\Program Files\dotnet\shared\Microsoft.NETCore.App\8.0.8\coreclr.dll
Report Id: b863577b-2875-4c2c-8a8f-d4a92b77c78c
Faulting package full name:
Faulting package-relative application ID:
and
Application: LlamaCS2.exe
CoreCLR Version: 8.0.824.36612
.NET Version: 8.0.8
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Stack:
at LLama.Native.NativeApi.llama_backend_init()
at LLama.Native.NativeApi.llama_backend_init()
at LLama.Native.NativeApi..cctor()
at LLama.Native.NativeApi.llama_max_devices()
at LLama.Abstractions.TensorSplitsCollection..ctor()
at LLama.Common.ModelParams..ctor(System.String)
at LlamaTest.Program+<AI1>d__5.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at LlamaTest.Program.AI1()
at LlamaTest.Program+<Main>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at LlamaTest.Program.Main(System.String[])
at LlamaTest.Program.<Main>(System.String[])
Activity
plqplq commented on Jan 28, 2025
Further info: if I downgrade the backend to LLamaSharp.Backend.Cpu version 0.19.0, it does work on the failing machine.
github-actions commented on Apr 26, 2025
This issue has been automatically marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.
martindevans commented on Apr 26, 2025
Did newer versions resolve this?
github-actions commented on Jul 12, 2025
This issue has been automatically marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.