Added initial design doc for cross platform #5628
Conversation
Adding a few comments, need to look more at solution detail.
docs/code/CrossPlatform.md
Outdated
In order to fully test everything we need to, we would also need to change how we test to use the Helix testing servers. Currently, Helix doesn't have the capability to test Apple's new M1 code, but that is in the works.

### Mobile Support
.NET 6 will allow us to run natively on mobile. Since we are making these changes before .NET 6 is released, I propose we don't include that work as of yet. As long as we handle the native binaries correctly and make sure ML.NET provides descriptive error messages, we should be able to have mobile support as soon as .NET 6 releases for everything that currently has a software fallback.
It's important to know what we intend to do here. Do we think we'll be able to make all our native components work on these new platforms? If not, wouldn't that change how we're thinking about which components should get a managed fallback?
Updated a bit, but does mobile have issues with loading native modules?
WASM doesn't today. iOS and Android do, but would require explicit targeting and not all dependencies may be present. There are also potentially code execution constraints on these platforms (iOS -> no runtime codegen).
### Support Grid
This is what I propose for the support grid. Since .NET Core 2.1 is end-of-life this year, I am putting much less emphasis on it. Since .NET 5 will be out of support before .NET Core 3.1 will be, I am putting less CI emphasis on .NET 5.

| Platform | Architecture | Intel MKL | .NET Framework | .NET Core 2.1 | .NET Core 3.1 | .NET 5 |
Which versions and distributions of the below are we going to support?
Let's sync on this tomorrow.
.NET 6 will allow us to run natively on mobile. Since we are making these changes before .NET 6 is released, I propose we don't include that work as of yet. As long as we handle the native binaries correctly and make sure ML.NET provides descriptive error messages, we should be able to have mobile support as soon as .NET 6 releases for everything that currently has a software fallback.

### Support Grid
This is what I propose for the support grid. Since .NET Core 2.1 is end-of-life this year, I am putting much less emphasis on it. Since .NET 5 will be out of support before .NET Core 3.1 will be, I am putting less CI emphasis on .NET 5.
Can we split up the statements around "support" and "test" (perhaps build as well). Support is a statement we make to customers. Build and test is how we achieve that. I think it's reasonable to have a sparse matrix for testing where we focus on LTS releases, while still achieving coverage of our built assets.
Let's sync on this tomorrow.
Codecov Report
@@ Coverage Diff @@
## main #5628 +/- ##
==========================================
- Coverage 74.45% 68.38% -6.07%
==========================================
Files 1072 1131 +59
Lines 195988 241019 +45031
Branches 21546 25024 +3478
==========================================
+ Hits 145921 164830 +18909
- Misses 44274 69708 +25434
- Partials 5793 6481 +688
Flags with carried forward coverage won't be shown.
Co-authored-by: Stephen Toub <[email protected]>
### 2.5 3rd Party Dependencies
As mentioned above, there are several 3rd party packages that don't have support for non x86/x64 machines.
- LightGBM. LightGBM doesn't offer packages for non x86/x64. I was able to build the code for Arm64, but we would either have to build it ourselves, convince them to create more packages for us, or annotate that this doesn't work on non x86/x64 machines.
- TensorFlow. The full version of TensorFlow only runs on x86/x64. There is a [lite](https://www.tensorflow.org/lite/guide/build_arm64) version that supports Arm64, and you can install it directly with Python, but this isn't the full version so not all models will run. We would also have to verify whether the C# library we use to interface with TensorFlow will work with the lite version.
When non-.NET apps do inference on Android or iPhone, do they use this lite TensorFlow? That would suggest it's sufficient.
For mobile, you can use either the TFLite API on the respective platforms or things like MLKit / Core ML. The minimum requirement though would be the TFLite API.
I believe that TFLite will work out of the gate with the bindings that we have. I haven't done a swap to test it for sure though, I'll do that tomorrow.
As mentioned above, there are several 3rd party packages that don't have support for non x86/x64 machines.
- LightGBM. LightGBM doesn't offer packages for non x86/x64. I was able to build the code for Arm64, but we would either have to build it ourselves, convince them to create more packages for us, or annotate that this doesn't work on non x86/x64 machines.
- TensorFlow. The full version of TensorFlow only runs on x86/x64. There is a [lite](https://www.tensorflow.org/lite/guide/build_arm64) version that supports Arm64, and you can install it directly with Python, but this isn't the full version so not all models will run. We would also have to verify whether the C# library we use to interface with TensorFlow will work with the lite version.
- ONNX Runtime. ONNX Runtime doesn't have prebuilt packages for anything beyond x86/x64. It does support Arm, but we would have to build it ourselves or get the ONNX Runtime team to package Arm assemblies. This is the same situation as with LightGBM.
If we will need ONNX Runtime and LightGBM for Arm*, I suggest reaching out to them informally now to see whether this is something they'd contemplate, whether it's part of their roadmap already, etc., and you can note it in this doc. They might be interested in reviewing the doc in general, too.
We spoke with ONNX Runtime about it and they basically said that since there were no customer asks for it on Arm, it wasn't on their near-term to-do list. Hopefully when ML.NET officially supports Arm we can get the data needed to make that ask.
I haven't spoken with LightGBM about it. Do you have a contact for them? If not, I can find one.
docs/code/CrossPlatform.md
Outdated
ML.NET allows .NET developers to develop/train their own models and infuse custom machine learning into their applications using .NET.

Currently, while .NET is able to run on many different platforms and architectures, ML.NET is only able to run on Windows, Mac, and Linux, either x86 or x64. This excludes many architectures such as Arm, Arm64, M1, and WebAssembly, places where .NET is currently able to run.
I would say the most important platform is Arm64/Apple Silicon followed by WASM followed by Arm. So if something is going to be slow on Arm (after all we don't support intrinsics there), or possibly even missing, that shouldn't necessarily prevent us doing the right thing for Arm64.
From what I have seen and tested so far, arm vs arm64 should mostly take the same work to get both working. The main differences should mostly be with the intrinsics, if we add them. That may change as I get farther into the implementation, but that seems to be the case currently.
Currently, while .NET is able to run on many different platforms and architectures, ML.NET is only able to run on Windows, Mac, and Linux, either x86 or x64. This excludes many architectures such as Arm, Arm64, M1, and WebAssembly, places where .NET is currently able to run.

The goal is to enable ML.NET to run everywhere that .NET itself is able to run.
Are there secondary goals? E.g.:
1. Do not regress ML.NET performance on platforms where it currently runs.
2. Where possible, aim for convergence to a single implementation for all platforms. (These goals may be in conflict.)
3. Where possible, avoid taking on ownership of code that is not core to ML.NET or does not align with our expertise. For example, we would probably not want to maintain a large math library.
Another:
4. Optimize for progressive delivery to reduce risk, i.e., a solution where we expand to one platform at a time, or with partial functionality, release, and continue. No big-bang effort that succeeds or fails together.
- Where possible, prefer that dependencies be supplied with the platform (possibly via a package manager) rather than shipped as part of ML.NET. (This is a weaker goal, as usability is important, so maybe it's a non-goal.)
I have lots more info about our dependency on x86/x64 in another document if required.

### 3.1 Rewrite native components to work on other platforms
This will still allow us to gain the benefits of the x86/x64 SIMD instructions and Intel MKL on architectures that support them, but will also keep the benefits of native code in the other places. The downside is that we would have to build the native code for, potentially, a lot of different architectures.
Or convince upstream to build it?
Here I was meaning the native ML.NET components we wrote ourselves. No upstream in this case sadly.
I was unable to find all the replacements we would need. We would end up having to write many native methods ourselves with this approach.

### 3.2 Rewrite native components to be only managed code
This will truly allow ML.NET to run anywhere that .NET runs. The only downsides are the speed of execution and the time to rewrite the existing native code we have. If we restrict new architectures to .NET Core 3.1 or newer, we will have an easier time with the software fallbacks as some of this code has already been written. This solution will also require a lot of code rewrite from native code to managed code.
Aside from Arm, where we do not have intrinsics, I would not assume that the speed need be significantly compromised. We have had good results, e.g., moving parts of native code of CoreCLR into managed. However, it needs careful optimization, and sometimes JIT work. So I can't claim it's cheap, but it's no longer necessarily slower.
I suppose if the native code uses special tricks like GPU then we can't compete.
That is a good point. Let me rephrase that to better state that in the doc. It would only be slower if we didn't do all those steps you mentioned.
- SymSgdNative needs to either be rewritten in managed code, or have the 4 Intel MKL methods it uses rewritten. The 4 methods just deal with vector manipulation and shouldn't be hard to do.

## My Suggestion
My suggestion would be to start with the hybrid approach. It will require the least amount of work to get ML.NET running elsewhere, while still being able to support a large majority of devices out of the gate. This solution will still limit the platforms we can run on to those we build the native components for, initially Arm64 devices, but we can do a generic Arm64 compile so it should work for all Arm64 v8 devices. The goal is to eventually have a managed implementation which can work everywhere .NET does, and accelerated components to increase performance where possible. This could include native components in some cases or hardware intrinsics in others.
sounds good to me
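To make the managed-rewrite option more concrete, here is a rough sketch of a portable managed vector routine of the kind the SymSgdNative/MKL dependency seems to involve; the method and its name are illustrative only (not one of the actual four MKL methods), and it uses `System.Numerics.Vector<T>` so it runs anywhere .NET runs:

```csharp
// Illustrative managed replacement for a simple "scale and add" vector routine
// (y := y + a * x). Not the actual SymSgdNative/MKL method, just the general shape.
using System;
using System.Numerics;

public static class ManagedVectorOpsSketch
{
    public static void AddScale(float a, ReadOnlySpan<float> x, Span<float> y)
    {
        if (x.Length != y.Length)
            throw new ArgumentException("Spans must have the same length.");

        int i = 0;
        int width = Vector<float>.Count;

        // Vector<T> uses whatever SIMD width the JIT supports on the current hardware
        // and silently falls back to scalar code where SIMD is unavailable.
        for (; i <= x.Length - width; i += width)
        {
            var vx = new Vector<float>(x.Slice(i, width));
            var vy = new Vector<float>(y.Slice(i, width));
            (vy + a * vx).CopyTo(y.Slice(i, width));
        }

        // Scalar tail for the remaining elements.
        for (; i < x.Length; i++)
        {
            y[i] += a * x[i];
        }
    }
}
```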
My suggestion would be to start with the hybrid approach. It will require the least amount of work to get ML.NET running elsewhere, while still being able to support a large majority of devices out of the gate. This solution will still limit the platforms we can run on to those we build the native components for, initially Arm64 devices, but we can do a generic Arm64 compile so it should work for all Arm64 v8 devices. The goal is to eventually have a managed implementation which can work everywhere .NET does, and accelerated components to increase performance where possible. This could include native components in some cases or hardware intrinsics in others.

### Improving managed fallback through intrinsics
We should also target .NET 5 so that we gain access to the Arm64 intrinsics. Rather than implementing special-purpose native libraries to take advantage of architecture-specific instructions, we should instead enhance performance by ensuring our managed implementations leverage intrinsics.
It may well be OK if Arm* only works on .NET 5+, or it's slow on 3.1 only.
I was about to write up something similar in the Support Grid section.
If supporting only .NET 5+ saves us a lot of effort or complexity, perhaps this would be a good time to bump the major version of ML.NET and reduce the number of .NET versions we support?
We won't save a lot of effort/complexity just by going to .NET 5. We could gain execution speed if we switched to .NET 5+ and also implemented the intrinsics for it. It may be worth it to update the major version of ML.NET once we have all those hardware intrinsics in place for arm.
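To sketch what leveraging the Arm64 intrinsics in a managed implementation could look like on .NET 5+, here is a minimal, hypothetical example (not the actual CpuMath code) of a dot product with a NEON-accelerated path and a scalar fallback:

```csharp
// Hypothetical sketch: an intrinsics-accelerated managed path on Arm64 (.NET 5+),
// with a scalar fallback for every other platform. Not ML.NET's actual CpuMath code.
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.Arm;

public static class DotProductSketch
{
    public static unsafe float DotProduct(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Spans must have the same length.");

        float result = 0f;
        int i = 0;

        if (AdvSimd.IsSupported && a.Length >= 4)
        {
            fixed (float* pa = a)
            fixed (float* pb = b)
            {
                // Accumulate 4 lanes at a time using NEON fused multiply-add.
                Vector128<float> acc = Vector128<float>.Zero;
                for (; i <= a.Length - 4; i += 4)
                {
                    acc = AdvSimd.FusedMultiplyAdd(
                        acc, AdvSimd.LoadVector128(pa + i), AdvSimd.LoadVector128(pb + i));
                }

                // Horizontal sum of the four accumulator lanes.
                result = acc.GetElement(0) + acc.GetElement(1)
                       + acc.GetElement(2) + acc.GetElement(3);
            }
        }

        // Scalar tail, and the full fallback on platforms without NEON support.
        for (; i < a.Length; i++)
        {
            result += a[i] * b[i];
        }

        return result;
    }
}
```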
### 2.3 Managed Code
Since ML.NET has a hard dependency on x86/x64, the managed code imports DLLs without checking whether or not they exist. If the DLLs don't exist, you get a hard failure. For example, if certain columns are active, the `MulticlassClassificationScorer` will call `CalculateIntermediateVariablesNative`, which is loaded from `CpuMathNative`, but all of this is done without any checks to see if the DLL actually exists. The tests also run into this problem; for instance, the base test class imports and sets up Intel MKL even if the test itself does not need it.

### 2.4 Native Projects
Do any of these have transitive dependencies that might be a problem?
The native projects? I don't believe there are any other dependencies or issues than the ones I have listed below. As I get further into the implementation that may change, but from the testing I have done so far that seems to be the case.
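As a rough illustration of the kind of guard section 2.3 above is asking for, the sketch below probes for the CpuMathNative binary before committing to a P/Invoke path; the helper class and property names are hypothetical, while `NativeLibrary.TryLoad` itself is a real .NET Core 3.0+ API:

```csharp
// Hypothetical sketch: check that a native library can be loaded before using its P/Invokes.
// "CpuMathNative" is the library name mentioned in the doc; the helper class is illustrative.
using System;
using System.Reflection;
using System.Runtime.InteropServices;

internal static class NativeAvailabilitySketch
{
    private static readonly Lazy<bool> _cpuMathNativeAvailable = new Lazy<bool>(() =>
        NativeLibrary.TryLoad("CpuMathNative", Assembly.GetExecutingAssembly(), searchPath: null, out _));

    internal static bool CpuMathNativeAvailable => _cpuMathNativeAvailable.Value;
}

// Callers could then pick a path instead of hard-failing:
//   if (NativeAvailabilitySketch.CpuMathNativeAvailable) { /* native (P/Invoke) path */ }
//   else { /* managed fallback, or a descriptive PlatformNotSupportedException */ }
```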
In order to fully test everything we need to, we would also need to change how we test to use the Helix testing servers. Currently, Helix doesn't have the capability to test Apple's new M1 code, but that is in the works. We will be building once for each architecture/platform combination, and then fan out and submit one job for each framework version we want to test. We will also be cross-targeting builds. For example, we can build on normal Linux to target Arm Linux and then run those tests using Helix. It's estimated to be about a medium amount of work to make the changes required to use Helix.

### Mobile Support
.NET 6 will allow us to run natively on mobile. Since we are making these changes before .NET 6 is released, I propose we don't include that work as of yet. As long as we handle the native binaries correctly and make sure ML.NET provides descriptive error messages, we should be able to have mobile support as soon as .NET 6 releases for everything that currently has a software fallback. Since the native projects we are proposing to keep build for Arm64, they should work on mobile as well.
Since the native projects we are proposing to keep build for Arm64, they should work on mobile as well.
This is possibly not the case, depending on whether they have OS/platform dependencies. If the library doesn't mention that it supports iOS/Android, this may be a problem. The dependencies may be libraries, APIs, or behaviors. This is actually true for Blazor WASM as well.
That's good to know. I will need to look more into this. Thanks for the heads up.
If you have native dependencies you will have to build them differently for mobile platforms. The Arm64 aspect is only one angle; you need to deal with a different libc, etc. as well.
.NET 6 will allow us to run natively on mobile. Since we are making these changes before .NET 6 is released, I propose we don't include that work as of yet. As long as we handle the native binaries correctly and make sure ML.NET provides descriptive error messages, we should be able to have mobile support as soon as .NET 6 releases for everything that currently has a software fallback. Since the native projects we are proposing to keep build for Arm64, they should work on mobile as well.

### Support Grid
This is what I propose for the support grid. Since .NET Core 2.1 is end-of-life this year, I am putting much less emphasis on it. Since .NET 5 will be out of support before .NET Core 3.1 will be, I am putting less CI emphasis on .NET 5.
As above, I would do zero 2.1-specific coding effort. Maybe just cursory testing.
I haven't heard that phrase before. Can you explain in more depth?
I think Dan is just mentioning that we don't invest in making our .NETCoreApp2.1 builds work on more platforms, be faster, or have more test coverage. Since the platform is going out of support soon and it should be "easier" to do work on newer frameworks, then we should focus our efforts on the newer frameworks.
cc @marek-safar for thoughts about potential iOS/Android/Blazor-specific problems.
ML.NET has 6 native projects. They are:
- CpuMathNative
  - Partial managed fallback when using .NET Core 3.1.
  - A large amount of work would be required to port the native code to other platforms. We would have to change all the SIMD instructions for each platform.
This might be partially automatable. The C++ hardware intrinsics have a largely 1-to-1 mapping with the corresponding .NET hardware intrinsics, just with different names.
Is there software we have that does that automation automatically? Or would I have to create that automation as well? Is this true for arm intrinsics as well?
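As one concrete (and hedged) example of that mapping, take a basic SSE add; the pairing below is illustrative and each instruction would still need to be verified individually, with Arm mappings coming from `System.Runtime.Intrinsics.Arm` instead:

```csharp
// Illustrative only: a single C++ SSE intrinsic and its .NET hardware intrinsic counterpart.
// C++ (x86 SSE):  __m128 sum = _mm_add_ps(_mm_loadu_ps(pa), _mm_loadu_ps(pb));
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

public static class IntrinsicMappingSketch
{
    public static unsafe Vector128<float> AddFour(float* pa, float* pb)
    {
        // Sse.LoadVector128 corresponds to _mm_loadu_ps, and Sse.Add to _mm_add_ps.
        return Sse.Add(Sse.LoadVector128(pa), Sse.LoadVector128(pb));
    }
}
```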
Looks great! Few comments / questions.
docs/code/CrossPlatform.md
Outdated
2. Native components must be explicitly built for additional architectures which we wish to support. This limits our ability to support new platforms without doing work.
3. Building our native components for new platforms faces challenges due to lack of support for those components' dependencies. This limits our ability to support the current set of platforms.
4. Some of our external dependencies have limited support for .NET's supported platforms.
5. Some things we use internally are optomized for x86/x64 and wont work well on other platforms. For example various components are parallelized but current webassembly targets are single threaded. It's likely some changes will be necessary to various algorithms to work well in these environments.
Suggested change:
5. Some things we use internally are optomized for x86/x64 and wont work well on other platforms. For example various components are parallelized but current webassembly targets are single threaded. It's likely some changes will be necessary to various algorithms to work well in these environments.
5. Some things we use internally are optomized for x86/x64 and won't work well on other platforms. For example various components are parallelized but current WebAssembly targets are single threaded. It's likely some changes will be necessary to various algorithms to work well in these environments.
I was unable to find all the replacements we would need. We would end up having to write many native methods ourselves with this approach.

### 3.2 Rewrite native components to be only managed code
This will truly allow ML.NET to run anywhere that .NET runs. The only downsides are the speed of execution and the time to rewrite the existing native code we have. If we restrict new architectures to .NET Core 3.1 or newer, we will have an easier time with the software fallbacks as some of this code has already been written. This solution will also require a lot of code rewrite from native code to managed code.
I believe with .NET Core 3.1 you have issues with WebAssembly because it looks for CpuMathNative.
It looks for it but it's not needed. That's part of the build process itself. I will be fixing that as part of this process.
- SymSgdNative needs to either be rewritten in managed code, or have the 4 Intel MKL methods it uses rewritten. The 4 methods just deal with vector manipulation and shouldn't be hard to do.

## My Suggestion
My suggestion would be to start with the hybrid approach. It will require the least amount of work to get ML.NET running elsewhere, while still being able to support a large majority of devices out of the gate. This solution will still limit the platforms we can run on to those we build the native components for, initially Arm64 devices, but we can do a generic Arm64 compile so it should work for all Arm64 v8 devices. The goal is to eventually have a managed implementation which can work everywhere .NET does, and accelerated components to increase performance where possible. This could include native components in some cases or hardware intrinsics in others.
👍
In order to fully test everything we need to, we would also need to change how we test to use the Helix testing servers. Currently, Helix doesn't have the capability to test Apple's new M1 code, but that is in the works. We will be building once for each architecture/platform combination, and then fan out and submit one job for each framework version we want to test. We will also be cross-targeting builds. For example, we can build on normal Linux to target Arm Linux and then run those tests using Helix. It's estimated to be about a medium amount of work to make the changes required to use Helix.

### Mobile Support
.NET 6 will allow us to run natively on mobile. Since we are making these changes before .NET 6 is released, I propose we don't include that work as of yet. As long as we handle the native binaries correctly and make sure ML.NET provides descriptive error messages, we should be able to have mobile support as soon as .NET 6 releases for everything that currently has a software fallback. Since the native projects we are proposing to keep build for Arm64, they should work on mobile as well.
To make sure I understand correctly, by making the proposed changes, once .NET 6 is out ML.NET is expected to work out of the box on those platforms. The table below refers to CI efforts.
Co-authored-by: Dan Moseley <[email protected]>
4. Some of our external dependencies have limited support for .NET's supported platforms.
5. Some things we use internally are optimized for x86/x64 and won't work well on other platforms. For example, various components are parallelized but current WebAssembly targets are single threaded. It's likely some changes will be necessary to various algorithms to work well in these environments.

### 2.1 Problems
What about other problems? Does it work on platforms with memory/CPU limitations? Does it use any APIs which are not universally available (e.g. threads)?
I think initially we should annotate that they don't work on non x86/x64 devices. This includes logging an error when users try to run an unsupported 3rd party dependency, and then failing gracefully with a helpful and descriptive error. The user should be able to compile the 3rd party dependency, for the ones that support it, and have ML.NET still be able to pick it up and run it if it exists. ONNX Runtime is something that we will probably want, but we can look more into this as we get requests for it in the future.

### Helix
In order to fully test everything we need to, we would also need to change how we test to use the Helix testing servers. Currently, Helix doesn't have the capability to test Apple's new M1 code, but that is in the works. We will be building once for each architecture/platform combination, and then fan out and submit one job for each framework version we want to test. We will also be cross-targeting builds. For example, we can build on normal Linux to target Arm Linux and then run those tests using Helix. It's estimated to be about a medium amount of work to make the changes required to use Helix.
Helix doesn't have the capability to test Apple's new M1 code
I don't think that's accurate; we have been using it for some time.
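To illustrate the "log an error and fail gracefully with a descriptive error" idea from the paragraph quoted above, here is a hypothetical sketch; the checker class and message wording are illustrative, not existing ML.NET code:

```csharp
// Hypothetical sketch of a graceful, descriptive failure for an unsupported 3rd party
// dependency (LightGBM used as the example); not existing ML.NET code.
using System;
using System.Runtime.InteropServices;

internal static class DependencyCheckSketch
{
    internal static void EnsureSupportedForLightGbm()
    {
        Architecture arch = RuntimeInformation.ProcessArchitecture;
        if (arch != Architecture.X86 && arch != Architecture.X64)
        {
            throw new PlatformNotSupportedException(
                $"LightGBM does not ship prebuilt binaries for {arch}. " +
                "If you build LightGBM for this architecture yourself and place the native " +
                "binary alongside the application, ML.NET can attempt to load it.");
        }
    }
}
```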
| Mac | x64 | Yes | No | Yes, no CI | Yes | Yes, no CI |
| Mac | Arm64 | No | No | No | Yes | Yes, no CI |
| Ios | Arm64 | No | No | No | No | No |
| Ios | x64 | No | No | No | No | No |
Note: .NET 6 will also support iOS on Arm.
| Windows | x86 | Yes | Yes, no CI | Yes, no CI | Yes, no CI | Yes, no CI |
| Mac | x64 | Yes | No | Yes, no CI | Yes | Yes, no CI |
| Mac | Arm64 | No | No | No | Yes | Yes, no CI |
| Ios | Arm64 | No | No | No | No | No |
iOS is the correct name for the platform
| Ios | x64 | No | No | No | No | No |
| Linux | x64 | Yes | No | Yes, no CI | Yes | Yes, no CI |
| Linux | Arm64 | No | No | No | Yes | Yes |
| Android | Arm64 | No | No | No | No | No |
Android arm is also supported
This is the initial design/proposal doc for our cross platform approach.
I would appreciate it if you could each review it and give me your feedback.