I did a thing

How AI Helped Me Today (It Was Not By Writing Code)

AI helped me out here and I'm happy with using it like this!

  1. It thought I might have been injecting with the wrong scope (I was not, but a nice refresher)

  2. It suggested some logging things (which did not help, but a nice refresher, and sure... I'll go ahead and put that in)

  3. It thought my tests might be causing issues (they were not, but thanks for confirming how I was using an in-memory DB)

  4. It hallucinated that I had Blazor involved (I do not)

  5. It confirmed that I didn't need logging in `DbContextFactory : IDesignTimeDbContextFactory<BlogContext>` (used for migrations), so I'd be fine overloading BlogContext's constructor. And yeah... I should really split identity out into its own context for cleaner debugging. I'll take this stack-trace logging out now, but it was able to confirm that my context init was happening 2x, once for identity.

        public class BlogContext : IdentityDbContext<User>
        {
            private readonly ILogger<BlogContext>? _logger;

            public BlogContext(DbContextOptions<BlogContext> options, ILogger<BlogContext> logger)
                : base(options)
            {
                _logger = logger;
                Guid instanceId = Guid.NewGuid();
                // Structured logging templates instead of interpolated strings.
                _logger.LogInformation("BlogContext created. Instance ID: {InstanceId}", instanceId);
                _logger.LogInformation("Creation Stack Trace:\n\n{StackTrace}", Environment.StackTrace);
            }

            // This allows DbContextFactory to avoid injecting an ILogger, because it doesn't need to.
            public BlogContext(DbContextOptions<BlogContext> options) : base(options)
            { }

            public DbSet<BlogPost> Posts { get; set; } = default!;
        }
  6. It did eventually notice an obvious missing await that was actually the issue here.
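
For the curious, the bug pattern was roughly this; a reconstructed sketch with hypothetical names, not my actual code:

    // Hypothetical reconstruction: without the await, SaveChangesAsync returns a
    // Task nobody observes, the method returns before the insert commits, and the
    // DbContext can be disposed while the save is still in flight.
    public class PostService
    {
        private readonly BlogContext _context;
        public PostService(BlogContext context) => _context = context;

        public async Task AddPostAsync(BlogPost post)
        {
            _context.Posts.Add(post);
            _context.SaveChangesAsync();        // fire-and-forget; only a compiler warning
            // await _context.SaveChangesAsync(); // the fix
        }
    }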

I really like AI as a context-specific summary and reminder of stuff I already know, and it's OK-ish at troubleshooting maybe 20% of the time, even if it might take the LONG way to get there. Since I've been on legacy apps for a long while, I'm really enjoying using it as a KB-refresh coach.

Oops Forgot About Light Mode

Yup. I forgot about people using light mode. I'll style that later. For now I'll just force dark mode with Tailwind.

    <html lang="en" class="dark">

and in tailwind.config.js:

    module.exports = {
        content: ["./Views/**/*.{cshtml,js}"],
        darkMode: 'class',
        // ...
    };

Containers Aren't Always Great

Config bites again (And AI was actually helpful resolving!)

The short version: I had docker-compose.override.yml in my solution root defining the DB host/port/name. That was fine for debugging locally, but when I tried to add a migration it obviously had issues, since the EF cmdlets don't know anything about the container at all. This certainly seemed like a good idea at some point, because something like

    DB__HOST=host.docker.internal
    DB__PORT=5432
    DB__DATABASE=blog

Well, that makes sense. I'd put that into environment variables, and put the secrets (username/password) into something more secure like (whatever) Vault. But while this works just fine for debugging locally, making and running DB migrations knows nothing about the containers. Don't do this, I guess.
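
One detail that ties this to the factory code below: the double underscore in those variable names is the cross-platform stand-in for the `:` config separator, so `DB__HOST` surfaces in .NET config as the key `DB:host`. A minimal sketch:

    // DB__HOST from the environment and "DB:host" from user secrets resolve
    // to the same configuration key at runtime.
    IConfigurationRoot conf = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .Build();
    string? host = conf["DB:host"]; // reads DB__HOST when sourced from the environment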

Copilot was actually helpful tracking down why the Entity Framework tools weren't working for me, not so much by direct answers as by suggesting some good spots for logging. I actually appreciate this very much as the 2nd good way to use AI (the first being "make me this pile of boilerplate code so I don't have to type and copy-pasta it").

Anyway: config is weird in apps, and it's really great when it works like you expect, but there's sneaky stuff in CI/CD (always has been) because there are SO MANY ways to do environment-based config. I've hit many of them before, but the config builder in ASP.NET Core is crazy in the number of places it looks. When you add containers on top of this (especially when debugging locally) it can be hard to see where config is coming from, because it does its best to look everywhere. Is that a bad thing? Probably not, but it would be nice if there were a quick way to just dump WHERE the config came from.
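
As it turns out, there is something close to that: `GetDebugView()` on `IConfigurationRoot` dumps every key, its value, and which provider supplied it. A quick sketch of exposing it on a dev-only endpoint (the `/debug/config` route is just something I made up):

    // Dev-only dump of every config key, its value, and the provider it came from.
    var app = builder.Build();
    if (app.Environment.IsDevelopment())
    {
        app.MapGet("/debug/config", (IConfiguration conf) =>
            (conf as IConfigurationRoot)?.GetDebugView() ?? "no configuration root");
    }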

To create DB migrations and update the actual database with the Entity Framework tools or cmdlets, I had to add a class that implements IDesignTimeDbContextFactory so the tools can set up the context. The config is built the same way it is in Program.cs with the builder. I only need .AddEnvironmentVariables() and .AddUserSecrets<Program>(), but the JSON provider seemed worth adding just in case I want to handle config differently in the future. Copilot suggested I throw some exceptions with more info, which, since this took a bit to track down, seemed like a fine idea.

    using Microsoft.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore.Design;
    using Microsoft.Extensions.Configuration;
    using Npgsql;

    public class DbContextFactory : IDesignTimeDbContextFactory<BlogContext>
    {
        public BlogContext CreateDbContext(string[] args)
        {
            // Determine the path to your project's root directory
            var basePath = Directory.GetParent(Directory.GetCurrentDirectory()).FullName;

            IConfigurationRoot conf = new ConfigurationBuilder()
                .SetBasePath(basePath)  // Use the project's root directory
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables()
                .AddUserSecrets<Program>()
                .Build();

            NpgsqlConnectionStringBuilder csBuilder = new();

            csBuilder.Host = conf["DB:host"] ?? throw new InvalidOperationException("DB:host is not configured.");
            csBuilder.Port = int.Parse(conf["DB:port"] ?? throw new InvalidOperationException("DB:port is not configured."));
            csBuilder.Database = conf["DB:database"] ?? throw new InvalidOperationException("DB:database is not configured.");
            csBuilder.Username = conf["DB:username"] ?? throw new InvalidOperationException("DB:username is not configured.");
            csBuilder.Password = conf["DB:password"] ?? throw new InvalidOperationException("DB:password is not configured.");

            var optionsBuilder = new DbContextOptionsBuilder<BlogContext>();
            optionsBuilder.UseNpgsql(csBuilder.ConnectionString);

            // Use the constructor overload that doesn't require an ILogger.
            return new BlogContext(optionsBuilder.Options);
        }
    }

Further reading: Configuration providers, and ASP and User Secrets.

Ugly Code!

Markdown code blocks in prose are ugly. This post is a nothing burger.

So let's grab highlight.js and make them pretty! A bit in tailwind.config.js here to get rid of the weird background; it's the darker color behind the highlighted code, eww.

    theme: {
        extend: {
            typography: (theme) => ({
                DEFAULT: {
                    css: {
                        // Reset code block styles
                        'code': {
                            'background-color': 'transparent',
                            'padding': '0',
                        },
                        'pre': {
                            'background-color': 'transparent',
                            'padding': '0',
                        },
                        'pre code': {
                            'background-color': 'transparent',
                            'padding': '0',
                        },
                    },
                },
            }),
        },
    },

And moved my styles into _Layout.cshtml (kind of)

    <link rel="stylesheet" href="~/css/site.css" asp-append-version="true" />
    <link rel="stylesheet" href="~/css/tailwind.css" asp-append-version="true" />
    @RenderSection("Styles", required: false)
</head>

Just to make sure the order is correct so Tailwind doesn't override the highlight.js styles.

Finally on the page:

@section Styles {
    <link rel="stylesheet" href="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/styles/base16/dracula.css">
}
@section Scripts {
    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/highlight.min.js"></script>

    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/languages/csharp.min.js"></script>
    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/languages/json.min.js"></script>
    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/languages/javascript.min.js"></script>
    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/languages/http.min.js"></script>
    <script src="https://unpkg.com/@@highlightjs/cdn-assets@@11.11.1/languages/typescript.min.js"></script>
    <script>
        hljs.highlightAll();
    </script>
}

I think that's enough language support for now to keep the payload smaller.

OCR Implementation

The service is pretty straightforward; it's nearly exactly the sample code from MS.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

namespace slipsec.dev.Services
{
    public class OcrService : IOcrService
    {
        private readonly string _endpoint;
        private readonly string _key;
        public OcrService(string endpoint, string key)
        {
            _endpoint = endpoint;
            _key = key;
        }
        public async Task<List<string>> ProcessImageAsync(Stream fStream)
        {
            ComputerVisionClient client = Authenticate(_endpoint, _key);
            List<string> results = await AnalyzeImage(client, fStream);
            return results;
        }
        public static ComputerVisionClient Authenticate(string endpoint, string key)
        {
            ComputerVisionClient client =
              new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
              { Endpoint = endpoint };
            return client;
        }

        public static async Task<List<string>> AnalyzeImage(ComputerVisionClient client, Stream fileStream)
        {
            // Kick off the read operation; the response tells us where results will land.
            var readOp = await client.ReadInStreamAsync(fileStream);
            string operationLocation = readOp.OperationLocation;

            // The Operation-Location header holds the URI where the extracted text
            // will be stored. We only need the trailing GUID, not the full URL.
            string operationId = operationLocation.Substring(operationLocation.Length - 36);
            ReadOperationResult results;

            // Poll until the read operation finishes, with a short delay between polls.
            do
            {
                await Task.Delay(1000);
                results = await client.GetReadResultAsync(Guid.Parse(operationId));
            }
            while (results.Status == OperationStatusCodes.Running ||
                   results.Status == OperationStatusCodes.NotStarted);

            return ExtractText(results);
        }

        public static List<string> ExtractText(ReadOperationResult results)
        {
            List<string> lines = new List<string>();
            IList<ReadResult> rResults = results.AnalyzeResult.ReadResults;
            foreach (ReadResult rResult in rResults)
            {
                foreach (var line in rResult.Lines)
                {
                    lines.Add(line.Text);
                }
            }
            return lines;
        }
    }
}

From here it is just a matter of adding an interface:

namespace slipsec.dev.Services
{
    public interface IOcrService
    {
        /// <summary>
        /// Runs OCR over the supplied image and returns the recognized text.
        /// </summary>
        /// <param name="imageData">Stream containing the image to analyze</param>
        /// <returns>A list of the lines found by the OCR</returns>
        Task<List<string>> ProcessImageAsync(Stream imageData);
    }
}

And then handling the injection in Program.cs:

builder.Services.AddScoped<IOcrService, OcrService>(provider =>
    new OcrService(
        conf["OCR:AzureVisionEndpoint"] ?? throw new InvalidOperationException("OCR:AzureVisionEndpoint is not configured."),
        conf["OCR:AzureVisionKey"] ?? throw new InvalidOperationException("OCR:AzureVisionKey is not configured."))
);

There's a ton of ways to do config in .NET, but I like this one. I'm just using environment variables in docker for deploy, and the secrets manager for dev.
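
For contrast, a hypothetical version using the options pattern; `OcrOptions` is a name I'm inventing for this sketch, not something in my code:

    // Bind the "OCR" config section to a typed class and let DI hand it to the service.
    // Requires: using Microsoft.Extensions.Options;
    public sealed class OcrOptions
    {
        public string AzureVisionEndpoint { get; set; } = "";
        public string AzureVisionKey { get; set; } = "";
    }

    builder.Services.Configure<OcrOptions>(builder.Configuration.GetSection("OCR"));
    builder.Services.AddScoped<IOcrService>(sp =>
    {
        OcrOptions opts = sp.GetRequiredService<IOptions<OcrOptions>>().Value;
        return new OcrService(opts.AzureVisionEndpoint, opts.AzureVisionKey);
    });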

I'm not using a JSON file for now, but the keys are grouped in case I decide to move that direction.

  "DB:username": "redacted",
  "DB:password": "redacted",
  "OCR:AzureVisionEndpoint": "https:redacted//r/",
  "OCR:AzureVisionKey": "redacted",
  "Authentication:Google:ClientId": "redacted.apps.googleusercontent.com",
  "Authentication:Google:ClientSecret": "redacted"

I should have gone full TDD and put in some tests first but better late than never. I have yet to refactor the repository.

        [Fact]
        public async Task ShouldProcessImageAndReturnPlaylistEntries()
        {
            // Arrange
            _ocrServiceMock.Setup(x => x.ProcessImageAsync(It.IsAny<Stream>())).ReturnsAsync(new List<string> {
                "https://youtu.be/testUrl1", 
                "https://youtu.be/testUrl2" 
            });
            var controller = new OcrController(_context, _ocrServiceMock.Object);
            var stream = new MemoryStream();
            var testFile = new FormFile(stream, 0, stream.Length, "file", "file.jpg");

            // Act
            var result = await controller.ProcessImage(testFile, new OcrModel()) as ViewResult;

            // Assert
            var returnValue = Assert.IsType<OcrModel>(result?.Model);
            Assert.Equal(2, returnValue.PlaylistEntries.Count);
            Assert.Equal("https://youtu.be/testUrl1", returnValue.PlaylistEntries[0]);
            Assert.Equal("https://youtu.be/testUrl2", returnValue.PlaylistEntries[1]);
        }

And of course the controller:

        [HttpPost]
        public async Task<IActionResult> ProcessImage(IFormFile img, OcrModel model)
        {
            //TODO: multiple files?
            using Stream stream = img.OpenReadStream();
            List<string> lines = await _ocrService.ProcessImageAsync(stream);
            foreach (string line in lines)
            {
                model.PlaylistEntries.AddRange(
                    line.Split()
                    .Where(x => x.Contains("youtu")) // Business logic in the controller because this is only my personal blog.
                    );
            }
            using (MemoryStream ms = new MemoryStream())
            {
                await img.CopyToAsync(ms);
                model.Image = ms.ToArray();
            }
            return View("Index", model);
        }

This is a very MVC way of doing things, and there's really no need to pass the image back and forth like this; it should really be a fetch() that updates just what's needed, not the whole page. That's a good future refactor, but here's to delivering working software today.
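
If I do get to it, the refactor would look roughly like this; a sketch only, and the `api/ocr` route and `ProcessImageApi` name are hypothetical:

    // Hypothetical API-shaped version: return JSON so a small client-side
    // fetch() can update just the playlist list instead of a full page render.
    [HttpPost("api/ocr")]
    public async Task<IActionResult> ProcessImageApi(IFormFile img)
    {
        using Stream stream = img.OpenReadStream();
        List<string> lines = await _ocrService.ProcessImageAsync(stream);
        List<string> entries = lines
            .SelectMany(line => line.Split())
            .Where(x => x.Contains("youtu"))
            .ToList();
        return Json(entries);
    }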

OCR 1

My kids have a homeschool curriculum that includes a bunch of links to educational videos on YouTube. Typing these in by hand is a pain. I tried a couple of popular OCR NuGet packages like IronOcr and Tesseract, but they didn't work very well, correctly identifying maybe 2/3 of the links at best.

I finally set up and tried MS Azure Computer Vision (their AI OCR product), and it worked flawlessly, even on less-than-great-quality photos of the page from the book.

So I got to do some refresher stuff:

  1. Working with images

  2. Dependency injection for Services (Azure AI and YouTube)

  3. Unit Testing

I will detail how I did everything in future posts, but here's what it came out like for now: My implementation

What's My Stack?

My home stack? Nothing too fancy.

My old desktop is my only server, with backup storage connected. It runs Home Assistant in a KVM/QEMU VM and my website in docker. MQTT runs in another container so HA can talk with my ratgdo. Tailscale lets my phone VPN home so I can reach HA. A VLAN adds some semblance of security to IoT.

Some Raspberry Pis run Docker to host chat bots for a Discord group or two, plus random projects.

I don't have much installed on my workstation but often open are: OneNote, Visual Studio, Visual Studio Code, pgAdmin, Docker Desktop, Firefox, Chrome, Bitwarden, Discord, Signal, Telegram, maybe Spotify. I exist on Xitter, Bluesky, and Mastodon, but not frequently.

My website runs on docker as well but I moved it off of the Pi. The rest of the stack is:

  1. Backend

    1. Fronted by CloudFlare

    2. Served by Kestrel

    3. Standard dotnet docker images so based on Alpine

    4. Markdig for markdown->html rendering

    5. PostgreSQL database

    6. SSO via ASP.NET Core Identity

  2. Frontend

    1. Stacks Editor (from Stack Exchange) for the client-side markdown editor.

    2. Tailwindcss (and PostCSS) for styling

    3. Webpack, but likely soon moving to Vite for build.

    4. asp-client-validation so I could remove jQuery entirely.

Frontend Jungle

I've been working to replace Bootstrap and remove jQuery from my blog here (ASP MVC). My motivation was to get back up to speed on what's going on in the frontend world, and wow, is it wild! I've never thought of myself as a front-end dev, but that doesn't mean I haven't touched it, mostly supporting some legacy applications that used jQuery, Bootstrap, or plain javascript and CSS.

So, at the very least… I wanted to get my feet wet with some TypeScript. That's what I'm using to load StacksEditor (the markdown editor Stack Overflow and friends use) and also the replacement for jQuery unobtrusive validation.

But the real jungle of frontend stuff is the tools and frameworks. When I last traveled here it was Grunt and Gulp, maybe Browserify and webpack. My how things have exploded in this space.

Bundlers and builders:

Webpack is still going strong. Huge community, huge number of places it's being used. It has a million plugins that can do just about anything you could ever want in a build. But… like other bundlers and tools written in javascript, it's not going to be the FASTEST dog in the race.

Speaking of fast, Parcel does its heavy lifting in Rust (via the SWC compiler), so it's… fast.

Snowpack is dead and suggests moving to Vite. It had the same unbundled-for-dev idea that Vite uses with esbuild.

esbuild is up next. It's written in Go and is faster still. It is a bundler as well as a transpiler, though a much more minimal one than webpack.

Turbopack is basically just webpack but in Rust. It seems to be slowly getting there.

Rollup is what Vite uses for production builds. It does well there because it's a capable bundler, with tree-shaking, code splitting, and the other goodies you'd expect.

Vite is seemingly the newest kid on the block, and it uses two tools: esbuild for dev (FAST) and Rollup for prod. That's great for productivity, but has the obvious downside that dev is not the same as prod, and prod builds aren't fast like dev. It's also what a lot of React scaffolding uses these days, so it's everywhere and has tons of traction.

Rolldown looks like it's getting close, and is likely the path forward for Vite, replacing both esbuild and Rollup to be the best of both worlds. Of course it's written in Rust.

Since I'm writing TypeScript now, I have to use some sort of tool. For the time being I've… done some research, obviously, and settled on good old webpack. Mine is a small app, and webpack is most likely what I'll encounter at job[next], so it's probably the best one to brush up on, but I will be exploring Vite soon.

But wait there's more!

Runtimes:

Node.js isn't alone anymore as a javascript runtime, and that's probably good. It's mostly C++ and that's not sexy anymore. Now there's also:

Deno: by the same author, addressing things he wished were done differently and lessons learned; still runs on V8.

Bun: uses JavaScriptCore from WebKit instead of V8, and is written in Zig instead of Rust. What's Zig? Oh, it's kind of like Rust but not. It offers some of the direct memory control of C/C++ and some of the type checking of Rust. One key difference: Zig only checks generic code against the concrete uses it sees, where Rust proves a function taking a generic is safe for any value that could be passed. Oh, and now Bun is turning into a bundler too, because of course there's overlap in all these things.

And I still need to take a closer look at the CSS (transpiler?) tools and linters, but first I'm moving my site to Tailwindcss, so… more on that next. It's basically a big PostCSS plugin, so all the other bundlers and builders work well with it.

I love OSS, but this is why I love dotnet... The MS option has the weight of MS behind it, and it might take a hot second longer to do the latest thing, but at least you don't have to fight with a whole new build process every time you turn around. Speaking of that, create-react-app is an excellent example. It's the top google answer for how to get started with React. Top AI answer too. FINALLY it's been updated to tell you that this isn't the way things are done anymore, and hasn't been for quite a while. The entire reason it existed in the first place is that it's PAINFUL to get through this jungle of options just to get started. I plan to update this post with the inevitable new things that become the new front-end hotness.

Markdig and Tailwind typography

I switched from front-end markdown rendering with markdown-it to Markdig plus the typography plugin for Tailwind.

Install the npm package, update tailwind.config.js to add it.

    plugins: [
        require('@tailwindcss/typography'),
    ],

Then I used Markdig to convert it server-side.

            MarkdownPipeline pipeline = new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();
            posts.ForEach(post => {post.Body = Markdown.ToHtml(post.Body, pipeline); });
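
One small note on that: the pipeline is immutable once built, so it can be constructed once and reused across requests rather than rebuilt per page load. A sketch (the class and method names are mine, not from my actual code):

    using Markdig;

    // Build the Markdig pipeline once and reuse it; it's immutable after Build().
    public static class MarkdownRenderer
    {
        private static readonly MarkdownPipeline Pipeline =
            new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();

        public static string Render(string markdown) =>
            Markdown.ToHtml(markdown, Pipeline);
    }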

The advanced extensions covered everything I needed: bold, strikethrough, and the like. I'm still using stacks-editor for writing, but I don't think I'll keep it for too long. The preview is great, but I'm not sure I like the way it has to be styled and how it couples the CSS. I do like the image upload too, but we'll see. Here's a bunch of what it looks like:

asdf

asdf

asdf

bold

ital

striike

sdf is inline code

link

This 
is
a
code block
  1. numlist
  2. two
  3. three
  • ul:
  • two
  • three-

your text

Ubuntu and Home Assistant OS

Not too much to say here: I moved from a docker Home Assistant to HAOS, mostly because of the things you just can't do with HA in a container (add-ons). If I don't want to run another container for Mosquitto (for MQTT, so my ratgdo can talk to Home Assistant) and would rather just have it as an HA add-on, well, now I can.

  1. Create a VM (I just used the GUI tool for KVM/QEMU)

  2. Set the disk to the latest qcow2 disk image found here.

  3. Make sure to set the bios to UEFI

  4. Onboard HA as usual (or restore the backup from the docker version I made)

I ran this in bridged mode just so I didn't have to worry about port forwarding, so here's a bunch on that, and some for KVM/QEMU. It sure has been a while since I dug much into Linux, back in my gentoo days. I had to brush up on some systemd, but I'm happy to avoid iptables port forwarding, which I haven't touched since FreeBSD. There sure are a lot of ways to do network config now.

https://fabianlee.org/2022/09/20/kvm-creating-a-bridged-network-with-netplan-on-ubuntu-22-04/
https://netplan.readthedocs.io/en/latest/netplan-yaml/
https://ubuntu.com/server/docs/configuring-networks
https://docs.oracle.com/en/learn/ol-nmcli-bond/#objectives
https://drewdevault.com/2018/09/10/Getting-started-with-qemu.html
https://github.com/zjagust/kvm-qemu-install-script
https://zacks.eu/kvm-qemu-installation-configuration/#default-net
https://ubuntu.com/blog/kvm-hyphervisor
https://blog.mzfr.me/posts/2019-11-16-interface-names/
https://askubuntu.com/questions/704361/why-is-my-network-interface-named-enp0s25-instead-of-eth0
https://people.ubuntu.com/~slyon/netplan-docs/examples/

Dotnet on docker on Raspberry Pi

I tried running it on an old Pi, and while it works fine, the Pi is just overloaded with 3 apps and Home Assistant. Caveat: the image does need to be built for arm64.

    dotnet publish --os linux --arch arm64 -c Release /t:PublishContainer /p:PublishProfile=./slipsec.dev/Properties/PublishProfiles/ghcr.io.pubxml .\slipsec.dev\slipsec.dev.csproj