Run model from script #787
Merged
53 commits (all by alonre24):

040043d  Introduce kill switch mechanism for onnxruntime sessions (not ready yet)
0961e66  Putting login in onnx backend (not ready yet)
9d5fbf8  WIP
22662bd  Refactor background workers + add support to kill switch in onnx (for…
c3a45e9  Refactor - do not use rax, extend onnxRunSessions array whenever a ne…
2684852  Refactor backends loading
cd9baa1  Start testing - not finished
5c09106  Support bool type tensor
5d3dd2c  Support tensors of type bool. Add validation that a input value doesn…
fa14217  Merge branch 'Support_BOOL_type_for_tensors' into ONNX_kill_switch
04dac08  Support tensor of type bool in ONNX, Add tests for kill switch
1d6b3ed  Add load time config for ONNX_TIMEOUT. Parallel tests seems not to work.
ea3c174  Some fixes
05c2a39  Merge master (resolve conflicts in backends.c)
4bbfbcd  Remove debug print
cd2936c  Merge master with updated changes of supporting tensor of type bool
4aed8ca  Some fixes and documentation complement.
6cd9652  Refactor load time config
42059b8  Remove redundant include
6c906aa  Merge branch 'master' into ONNX_kill_switch
23749c4  PR fixes part 1: refactor config and run queue info files (and all pl…
342afbb  Merge branch 'ONNX_kill_switch' of https://github.com/RedisAI/RedisAI…
697faf9  linter...
ee02cc0  Merge branch 'master' into ONNX_kill_switch
1201cb2  linter...
4360679  Merge branch 'master' into ONNX_kill_switch
21737e6  More PR fixes, add the option to get the global run sessions array fr…
73f2a91  Minor fixes
6babc11  Add the ability to execute model through torch script - WIP
3942d23  More PR fixes, among that:
e9fed4f  Merge branch 'master' into ONNX_kill_switch
349653c  Fix tests for the case that we run on GPU - since CPU queue always cr…
af423a7  Update readies
8231c53  Update readies + minor fixes
fd8c672  PR fixes
78da23e  Return error if onnx is executed in a non async manner (via gears for…
a67da55  Merge branch 'master' into run_model_from_script
3545f7e  Merge branch 'ONNX_kill_switch' into run_model_from_script
ab505ba  basic test passes - running torch model from torch script is enabled.
c4d8b55  Extend tests to include onnx and tf as well.
fedc508  Fix device id - always use -1 when creating tensors for default CPU.
dd0d797  Resolve conflicts after merging master
56f815f  Remove test added for debug
fbee927  Remove debug additions and add comments and documentation.
d57e598  Change device id of default CPU to -1 in RDB loading as well.
525acc4  Fix and test error raising when a redis torch script operation fails.
e1b3745  Some PR fixes.
4869acf  Update device_id to -1 in older rdb versions
f02e0b5  Merge branch 'master' into run_model_from_script
d47d04a  - Move ownership on the output tensor to torch instead of copying it.
9d6b409  Merge branch 'run_model_from_script' of https://github.com/RedisAI/Re…
8a7cac9  Added comment
10988d3  Merge branch 'master' into run_model_from_script
New file added by this PR (diff hunk @@ -0,0 +1,70 @@), declaring the API that RedisAI exposes to its backend libraries as function pointers:

```c
#pragma once

#include <stdint.h>
#include "redismodule.h"
/* Added in this reconstruction: the DLPack header that defines the
 * DLManagedTensor / DLTensor types used below (assumed include path). */
#include "dlpack/dlpack.h"

#ifdef BACKENDS_API_EXTERN
#define BACKENDS_API extern
#endif

#ifndef BACKENDS_API
#define BACKENDS_API
#endif

typedef struct RAI_Tensor RAI_Tensor;
typedef struct RAI_Model RAI_Model;
typedef struct RAI_ModelRunCtx RAI_ModelRunCtx;
typedef struct RAI_Error RAI_Error;

/**
 * @return The internal id of the current RedisAI working thread.
 * The id range is {0, ..., <threads_count>-1}. If this is called from a non
 * RedisAI BG thread, return -1.
 */
BACKENDS_API long (*RedisAI_GetThreadId)(void);

/**
 * @return The number of working threads in RedisAI. This number should be
 * equal to the number of threads per queue (load time config) * the number of
 * devices registered in RedisAI (a new device is registered if a model is set
 * to run on this device in the AI.MODELSTORE command).
 */
BACKENDS_API uintptr_t (*RedisAI_GetThreadsCount)(void);

/**
 * @return The number of working threads per device queue (load time config).
 */
BACKENDS_API long long (*RedisAI_GetNumThreadsPerQueue)(void);

/**
 * @return The maximal number of milliseconds that a model run session should run
 * before it is terminated forcefully (load time config).
 * Currently supported only for the onnxruntime backend.
 */
BACKENDS_API long long (*RedisAI_GetModelExecutionTimeout)(void);

/**
 * The following functions are part of the RedisAI low level API (the full low
 * level API is defined in redisai.h). For every function below named
 * "RedisAI_X", its implementation can be found under the name "RAI_X" in the
 * RedisAI header files.
 */

BACKENDS_API int (*RedisAI_InitError)(RAI_Error **err);
BACKENDS_API void (*RedisAI_FreeError)(RAI_Error *err);
BACKENDS_API const char *(*RedisAI_GetError)(RAI_Error *err);

BACKENDS_API RAI_Tensor *(*RedisAI_TensorCreateFromDLTensor)(DLManagedTensor *dl_tensor);
BACKENDS_API DLTensor *(*RedisAI_TensorGetDLTensor)(RAI_Tensor *tensor);
BACKENDS_API RAI_Tensor *(*RedisAI_TensorGetShallowCopy)(RAI_Tensor *t);
BACKENDS_API void (*RedisAI_TensorFree)(RAI_Tensor *tensor);

BACKENDS_API RAI_ModelRunCtx *(*RedisAI_ModelRunCtxCreate)(RAI_Model *model);
BACKENDS_API int (*RedisAI_GetModelFromKeyspace)(RedisModuleCtx *ctx, RedisModuleString *keyName,
                                                 RAI_Model **model, int mode, RAI_Error *err);
BACKENDS_API int (*RedisAI_ModelRunCtxAddInput)(RAI_ModelRunCtx *mctx, const char *inputName,
                                                RAI_Tensor *inputTensor);
BACKENDS_API int (*RedisAI_ModelRunCtxAddOutput)(RAI_ModelRunCtx *mctx, const char *outputName);
BACKENDS_API size_t (*RedisAI_ModelRunCtxNumOutputs)(RAI_ModelRunCtx *mctx);
BACKENDS_API RAI_Tensor *(*RedisAI_ModelRunCtxOutputTensor)(RAI_ModelRunCtx *mctx, size_t index);
BACKENDS_API void (*RedisAI_ModelRunCtxFree)(RAI_ModelRunCtx *mctx);
BACKENDS_API int (*RedisAI_ModelRun)(RAI_ModelRunCtx **mctx, long long n, RAI_Error *err);
```
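The three introspection getters at the top of the header support the ONNX "kill switch" that the commit history mentions: each RedisAI worker thread has a stable id, the total thread count bounds a per-thread table of in-flight run sessions, and the load-time timeout says when a run should be cut off. The sketch below is a minimal illustration of that bookkeeping, not code from this PR; the include name backends_api.h (the diff view lost the filename), the RunSessionSlot struct, and all helper names are assumptions, and the backend-specific terminate call is only hinted at in a comment. A real implementation would also need atomics or locking around the slots.

```c
/* Hypothetical sketch: per-thread run-session table driven by the new getters. */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>
#include "backends_api.h" /* assumed name of the header shown above */

typedef struct RunSessionSlot {
    bool active;          /* is a run currently in flight on this thread? */
    long long started_ms; /* monotonic start time of the current run */
    void *session_handle; /* backend-specific handle, e.g. an ONNX run-options object */
} RunSessionSlot;

static RunSessionSlot *run_sessions = NULL;
static size_t run_sessions_len = 0;

static long long now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Called once at backend load time, after RedisAI has populated the
 * RedisAI_* function pointers: one slot per RedisAI worker thread. */
static void init_run_sessions(void) {
    run_sessions_len = (size_t)RedisAI_GetThreadsCount();
    run_sessions = calloc(run_sessions_len, sizeof(*run_sessions));
}

/* Called by a worker thread right before it starts executing a model. */
static void mark_run_started(void *session_handle) {
    long tid = RedisAI_GetThreadId();
    if (tid < 0 || (size_t)tid >= run_sessions_len)
        return; /* not a RedisAI BG thread */
    run_sessions[tid].session_handle = session_handle;
    run_sessions[tid].started_ms = now_ms();
    run_sessions[tid].active = true;
}

/* Called periodically (e.g. from a cron callback): cut off overdue runs. */
static void kill_overdue_runs(void) {
    long long timeout_ms = RedisAI_GetModelExecutionTimeout();
    for (size_t i = 0; i < run_sessions_len; i++) {
        if (run_sessions[i].active &&
            now_ms() - run_sessions[i].started_ms > timeout_ms) {
            /* Here the real backend would signal its runtime to stop, e.g. by
             * setting a terminate flag on run_sessions[i].session_handle. */
        }
    }
}
```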
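The low-level function pointers in the second half of the header are what let a caller such as a TorchScript op (per the PR title, "Run model from script") fetch a stored model from the keyspace and execute it. Below is a hedged sketch of that flow, not code from this PR, under the same assumptions as above (hypothetical file and function names). It assumes the pointers were populated by RedisAI at backend load time, that these calls follow the usual REDISMODULE_OK/REDISMODULE_ERR convention, and that REDISMODULE_READ is the expected access mode; the "input"/"output" node names are placeholders.

```c
/* Hypothetical sketch: run a stored model once from a backend thread,
 * using only the function pointers declared in the header above. */
#include <stddef.h>
#include "backends_api.h" /* assumed name of the header shown above */

/* Returns a shallow copy of the model's first output tensor, or NULL on error.
 * The caller owns the returned tensor and releases it with RedisAI_TensorFree. */
static RAI_Tensor *run_model_once(RedisModuleCtx *ctx, RedisModuleString *model_key,
                                  RAI_Tensor *input, RAI_Error *err) {
    RAI_Model *model = NULL;
    /* REDISMODULE_READ comes from redismodule.h; assumed to be the mode
     * expected for a read-only model lookup. */
    if (RedisAI_GetModelFromKeyspace(ctx, model_key, &model, REDISMODULE_READ, err) !=
        REDISMODULE_OK) {
        return NULL;
    }

    RAI_ModelRunCtx *mctx = RedisAI_ModelRunCtxCreate(model);
    RedisAI_ModelRunCtxAddInput(mctx, "input", input); /* placeholder node names */
    RedisAI_ModelRunCtxAddOutput(mctx, "output");

    RAI_Tensor *out = NULL;
    if (RedisAI_ModelRun(&mctx, 1, err) == REDISMODULE_OK &&
        RedisAI_ModelRunCtxNumOutputs(mctx) > 0) {
        /* Take a shallow copy so the tensor outlives the run context. */
        out = RedisAI_TensorGetShallowCopy(RedisAI_ModelRunCtxOutputTensor(mctx, 0));
    }
    RedisAI_ModelRunCtxFree(mctx);
    return out;
}
```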