
Hackathon ML.NET & ONNX: Building an Image Classifier Under Pressure

September 19, 2025 By James

Every year, I return to my alma mater's hackathon as a mentor, where I pitch a project idea to students and assemble a team. Hackathons are a mixture of adrenaline, caffeine, and problem-solving under pressure. As a .NET developer, I had a plan (or so I thought): leverage ML.NET and speed things up with a powerful shortcut, transfer learning, for image classification.

But as we quickly learned, sometimes accuracy just doesn’t cooperate when the clock is ticking.

The Clock Is Ticking

Our team needed an accurate image classifier, and we needed it fast: it was only one of two models we had to build in 36 hours. We dove right in with ML.NET's built-in transfer-learning trainer, MulticlassClassification.Trainers.ImageClassification. Several hours in, accuracy still wasn't good enough for one of our models. Far from it, actually, and we needed to pivot.
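For context, the transfer-learning pipeline we started with looked roughly like this. This is a minimal sketch, not our exact code: the ImageInput schema, imageFolder path, and column names are illustrative placeholders.

```csharp
using System.Collections.Generic;
using Microsoft.ML;

// Illustrative input schema: path to an image file plus its class label
public class ImageInput
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
}

public static class TrainingSketch
{
    public static ITransformer Train(IEnumerable<ImageInput> images, string imageFolder)
    {
        var mlContext = new MLContext();
        IDataView data = mlContext.Data.LoadFromEnumerable(images);

        // Map string labels to keys, load raw image bytes, then fine-tune a
        // pretrained network via the ImageClassification trainer
        var pipeline = mlContext.Transforms.Conversion.MapValueToKey("LabelKey", "Label")
            .Append(mlContext.Transforms.LoadRawImageBytes("Image", imageFolder, "ImagePath"))
            .Append(mlContext.MulticlassClassification.Trainers.ImageClassification(
                labelColumnName: "LabelKey", featureColumnName: "Image"))
            .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

        return pipeline.Fit(data);
    }
}
```

The trainer handles the transfer-learning details (bottleneck caching, fine-tuning) for you, which is exactly why it's so appealing under hackathon time pressure.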

The Magic of ONNX

ONNX (Open Neural Network Exchange) is an open standard that lets you use machine learning models across different frameworks - think of it as a universal adapter for ML.

We decided to forfeit training this model altogether in favor of something prebuilt, but the next question was how. Running out of time, and nearing completion of the Blazor Server app that would stream images to the backend, we just needed a model. That's where ML.NET shines: I didn't need to abandon my .NET app or spin up a separate Python API. I simply ingested an off-the-shelf ONNX model suited to our needs into our Blazor app. While ML.NET has its own trainers, its real strength here was interoperability, letting us drop in a prebuilt ONNX model without rewriting our app. With just a few lines of code, I had a reliable, accurate model running in my .NET app with fast inference.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load the ONNX model into an inference session
using var session = new InferenceSession("model.onnx");

// Prepare the input tensor ("input" must match the model's declared input name)
DenseTensor<float> inputTensor = PreprocessImage(imageData);
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", inputTensor)
};

// Run inference
using IDisposableReadOnlyCollection<DisposableNamedOnnxValue> results = session.Run(inputs);

// Extract output
var output = results.First().AsEnumerable<float>().ToArray();
Console.WriteLine($"Prediction: {string.Join(", ", output)}");
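The PreprocessImage helper above does the usual image-to-tensor conversion. Here's a minimal sketch of what that can look like, assuming an ImageSharp dependency and a model that expects a 224×224 RGB image in NCHW layout; the exact input size and normalization (plain 0-1 scaling vs. per-channel mean/std) depend on the model you download.

```csharp
using Microsoft.ML.OnnxRuntime.Tensors;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

public static class Preprocessing
{
    // Resize to 224x224 and scale pixels to [0, 1] in a [1, 3, 224, 224] tensor
    public static DenseTensor<float> PreprocessImage(byte[] imageData)
    {
        using var image = Image.Load<Rgb24>(imageData);
        image.Mutate(ctx => ctx.Resize(224, 224));

        var tensor = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
        for (int y = 0; y < 224; y++)
        {
            for (int x = 0; x < 224; x++)
            {
                Rgb24 pixel = image[x, y];
                tensor[0, 0, y, x] = pixel.R / 255f;
                tensor[0, 1, y, x] = pixel.G / 255f;
                tensor[0, 2, y, x] = pixel.B / 255f;
            }
        }
        return tensor;
    }
}
```

One gotcha worth flagging: if your preprocessing doesn't match what the model was trained with, predictions will be silently wrong, so check the model card before wiring this up.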

In the end, we created an impressive app while never needing to leave the .NET ecosystem. ONNX gave us that bridge, and ML.NET made it effortless to integrate.

Takeaway

Hackathons reward speed, but production apps reward flexibility. With ML.NET and ONNX, .NET developers don’t have to choose - you can move fast, stay accurate, and remain firmly in the .NET ecosystem.
