
CCR1xxx with containers

Posted: Wed Feb 21, 2024 1:46 pm
by tukan
If I'm able to build a linux with tile-gx support, will my tile CCR1 be able to run containers?

Re: CCR1xxx with containers

Posted: Wed Feb 21, 2024 5:16 pm
by tangent
Do you have a TILE build chain to create the OCI images?

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 2:51 pm
by tukan
Do you have a TILE build chain to create the OCI images?
If I had it, I would not be asking the question, would I?

I'm asking whether Mikrotik is going to support the Tile architecture, given that it was bought by Nvidia, which means the end of the Tile arch. Mikrotik has invested heavily in this architecture, so the question is: will Tile get container support? That is a simple binary question, yes or no (and in this case, "maybe" means no).

Since ppc64le and mips64le are already supported, I don't see any technical reason why Tile could not be supported too. Mikrotik already has to maintain the Tile kernel tree themselves.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 3:11 pm
by biomesh
All of the CCR1xxx series have been discontinued. Since they are not an active product any more, I doubt any development time will be spent on new features for that platform. They will be supported (according to Mikrotik) for at least 5 years from date of purchase. These updates do not guarantee new functionality.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 3:51 pm
by tukan
All of the CCR1xxx series have been discontinued. Since they are not an active product any more, I doubt any development time will be spent on new features for that platform. They will be supported (according to Mikrotik) for at least 5 years from date of purchase. These updates do not guarantee new functionality.
I see. Thank you for the information.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 8:58 pm
by kevinds
If I'm able to build a linux with tile-gx support, will my tile CCR1 be able to run containers?
No, because Mikrotik never released the container package for them.. I was not happy about this decision... My CCR1036 and CCR1016 routers with 16 GB RAM would have made great hosts..

It seemed to be planned up until 7.0 actually launched..

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 10:17 pm
by tangent
Do you have a TILE build chain to create the OCI images?
If I had it, I would not be asking the question, would I?

You've missed the point of my Socratic hint. What I wanted you to think about and realize is that even had MikroTik waved a magic wand and caused container support to appear in the TILE builds of RouterOS, how would you build the OCI images it needs to consume? No images, no running containers.

The current list of available build platforms for Docker is:

{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/mips64le",
    "linux/mips64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "aarch64",
    "arm",
    "mips64",
    "mips64le",
    "ppc64le",
    "riscv64",
    "s390x"
  ]
}

(The command that produces that output is "docker run --privileged --rm tonistiigi/binfmt --install all".)

Without TILE CPU support from Docker, there can be no OCI images for RouterOS to consume short of getting someone else (who?) to provide a complete build toolchain.
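
To make the absence concrete, here is a short Python sketch that checks a target platform against the report quoted above. The name "linux/tilegx" is hypothetical — Docker defines no such platform string, which is exactly the point:

```python
import json

# The buildx/binfmt report quoted above, pasted verbatim.
binfmt_report = json.loads("""
{
  "supported": ["linux/amd64", "linux/arm64", "linux/riscv64",
                "linux/ppc64le", "linux/s390x", "linux/386",
                "linux/mips64le", "linux/mips64",
                "linux/arm/v7", "linux/arm/v6"],
  "emulators": ["aarch64", "arm", "mips64", "mips64le",
                "ppc64le", "riscv64", "s390x"]
}
""")

def can_build_for(platform: str) -> bool:
    """True if Docker's installed emulator set can build images for `platform`."""
    return platform in binfmt_report["supported"]

print(can_build_for("linux/arm64"))   # → True
print(can_build_for("linux/tilegx"))  # → False: no such platform exists
```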

Note that this also answers the far more common questions about MIPS CPUs. There is upstream support for 64-bit MIPS CPUs, but not for the tiny 32-bit MIPS CPUs MT uses in so many of their products.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 10:23 pm
by kevinds
Without TILE CPU support from Docker, there can be no OCI images for RouterOS to consume short of getting someone else (who?) to provide a complete build toolchain.
Chicken-Egg...

With no Tile systems being able to run docker, there is no reason to create the toolchain.

The system has to be able to run Docker first, otherwise there is no point.. It isn't possible to run without the package, so even if someone provided the build toolchain, it still couldn't work.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 10:36 pm
by tangent
With no Tile systems being able to run docker

Building OCI images does not require Docker Engine to run on the target CPU. BuildKit allows cross-compiling from any supported host, provided you've installed the CPU emulators using the instructions linked from my prior post.

My point is, the set of available emulators for doing this cross-compilation does not include one for TILE CPUs.

In principle, someone could produce the QEMU and binfmt stuff needed to support this, but Docker, Inc. has not done so. It is possible for a third party to do it, but until someone does, there is no point in MT producing a corresponding container.npk package for that platform.

The only hope I see stems from the fact that MT clearly has a TILE CPU cross-compilation toolchain internally, for their own uses. While I doubt that it's set up to integrate with Docker Engine directly as-is, they could do the work to push that up through the QEMU and Linux kernel binfmt projects so that buildx could then consume it. The next question this raises, though, is what is their incentive for doing that for someone else's obsolete CPU architecture?

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 10:55 pm
by kevinds
It is possible for a third party to do it, but until someone does, there is no point in MT producing a corresponding container.npk package for that platform.

The only hope I see stems from the fact that MT clearly has a TILE CPU cross-compilation toolchain internally, for their own uses. While I doubt that it's set up to integrate with Docker Engine directly as-is, they could do the work to push that up through the QEMU and Linux kernel binfmt projects so that buildx could then consume it. The next question this raises, though, is what is their incentive for doing that for someone else's obsolete CPU architecture?
Alright, so let's say I do that because I want to run Docker on my CCR1036. It will do me no good, because Mikrotik still won't allow it to run. Put the hours in to get it ready just to *hope* it gets enabled??

If the only thing stopping me was me (or someone else) putting in the effort, then it would be worth doing to make it happen.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 11:26 pm
by tangent
I want to run Docker on my CCR1036

Correction: you want to run OCI containers on your CCR1xxx. The tooling produced by Docker, Inc. can produce and consume OCI images, but OCI is not "Docker", and Docker isn't the only way to produce these OCI images.

This is not a pointlessly niggly distinction. There is no sensible reason to believe that RouterOS contains any substantial amount of code written by Docker, Inc. employees. There may be some kernel patches and such, but nothing approaching a container runtime. The most barebones expression of Docker's runtime is runc, which is about 11 megs installed on my nearest-to-hand build platform, whereas the container.npk package is about one-hundredth that size.

The next-nearest competitors I'm aware of are crun and systemd-container at about 1.5 megs each. RouterOS's container runtime is stripped-down even by these standards.

If the only thing stopping me was me (or someone else) putting in the effort, then it would be worth doing to make it happen.

Have you tried proposing that to MT and have a rejection message in-hand, or are you presuming failure from the start?

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 11:40 pm
by kevinds
Have you tried proposing that to MT and have a rejection message in-hand, or are you presuming failure from the start?
Have you ever talked to MT support to suggest a feature? How did that turn out for you?

Why/what would I need to propose? That if the package were available, someone would make it work?

As I said, there is no reason to invest the time when the end-goal can't be achieved, regardless of the time/money invested.

Re: CCR1xxx with containers

Posted: Thu Feb 22, 2024 11:47 pm
by tangent
Why/what would I need to propose?

You'd need to get TILE support into QEMU, then get the Linux kernel's binfmt feature to recognize TILE binaries and send them down to QEMU for CPU emulation. This is how cross-compilation works under both Docker's BuildKit and Red Hat's Podman, at the least.
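
For the curious, the binfmt piece boils down to registration strings written to /proc/sys/fs/binfmt_misc/register, with delimiter-separated fields: name, type, offset, magic, mask, interpreter, flags. A minimal Python sketch of parsing one such entry — the aarch64 example is real in form, but its magic/mask bytes are abbreviated here, and a tilegx entry would additionally need a qemu-tilegx interpreter binary that nobody ships:

```python
def parse_binfmt(registration: str) -> dict:
    """Split a binfmt_misc registration string into its named fields.

    The first character of the string is the field delimiter (":" by
    convention), matching what the kernel expects on writes to
    /proc/sys/fs/binfmt_misc/register.
    """
    fields = registration.split(registration[0])[1:]
    names = ["name", "type", "offset", "magic", "mask", "interpreter", "flags"]
    return dict(zip(names, fields))

# Illustrative aarch64 entry; magic/mask abbreviated. A tilegx entry
# would need real TILE-Gx ELF magic bytes plus a qemu-tilegx interpreter,
# neither of which exists in any current distribution.
entry = parse_binfmt(":qemu-aarch64:M:0:\\x7fELF...:\\xff...:/usr/bin/qemu-aarch64:F")
print(entry["interpreter"])  # → /usr/bin/qemu-aarch64
```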

QEMU does not currently support TILE. I've found some 2015 patches for it, but as far as I can tell, it is not the case that they were released in QEMU for a time, then later removed.

Another way to come to the same conclusion on Red Hat type Linuxes is to say:

$ dnf search qemu | grep user-static

That yields your set of possible cross-compilation targets on that system, available to Podman, the premier OCI-compatible alternative to Docker.

Have you ever talked to MT support to suggest a feature?

Yes. RouterOS 7.14 contains a change to the container runtime that I arm-twisted them into implementing. It wasn't easy, but it did land.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 1:00 am
by tangent
Oh, and one more minor detail: you'd also need to provide at least one container base image for TILE, without which you wouldn't have the TILE compiler and library binaries for QEMU to run during the OCI image build steps.

Note, for example, that Alpine — a very popular container runtime base — is not yet ported to TILE.

This is all a tremendous amount of work, but if someone were to do it and hand MT a working image — one that instantiated and successfully ran under QEMU — they would have a hard time arguing against producing a container.npk for TILE.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 2:24 am
by kevinds
This is all a tremendous amount of work, but if someone were to do it and hand MT a working image — one that instantiated and successfully ran under QEMU — they would have a hard time arguing against producing a container.npk for TILE.
So.. a tremendous amount of work to eventually, maybe, get container.npk.. No one is going to undertake that for a "maybe, eventually."

What is the argument against releasing it now, knowing no one can use it?

I can't even get fixes for features that don't follow the provided documentation. Asking Mikrotik for features is a waste of time, and these are small adjustments to features that already exist.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 2:43 am
by tangent
No one is going to undertake that for that maybe eventually.

Then this "no one" is going to get exactly what they deserve: nothing.

MT's incentive to do all that work is zero. If you don't show MT that it can be done, they won't take the final step of building container.npk on TILE for you.

What is the argument against releasing it now, knowing no one can use it?

Doesn't that question answer itself? Until you have a build of QEMU supporting TILE, a useful base image for TILE to bootstrap other container images with, and a binfmt patch that ties the two together, a TILE build of container.npk has zero value.

The way I read your replies is that you think all this is MikroTik's fault, which is an odd stance since none but the final linchpin prerequisite projects are run by MikroTik.

Asking Mikrotik for features is a waste of time

Yes, that's why the RouterOS changelogs are zero in length and infinitely far between.

Oh, wait…that's not the case at all, is it?

As I already told you, RouterOS 7.14 not only has a change I wanted, implemented at my direct behest, it's in relation to the container runtime. It can be done.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 3:24 am
by kevinds
Yes, that's why the RouterOS changelogs are zero in length and infinitely far between.

Oh, wait…that's not the case at all, is it?
I didn't say there were no feature updates and changes being made; I said *asking* for a feature change is a waste of time. Mikrotik does what they want, feature-wise. The last time I asked for a feature to follow the provided documentation, the official response was "there are no plans to fix". It is/was a BGP option/setting that worked for IPv4 but not IPv6.

I have accepted that containers will never work on the Tile architecture, but I am still disappointed in the decision.

I am not going to put a thousand hours into a project for Mikrotik to *maybe* allow it after it is working. I am disappointed that Mikrotik pulled support for containers on the Tile architecture.. The higher-end CCRs seemed like the ideal place for containers, the CCR1036 and CCR1016 in particular.. Just don't use the MicroSD slot for storage though; it is extremely slow.

I requested that the configuration import/export include hashed user passwords around 6.44, this is a small, minor change.

I've asked for proper changelogs since I started using RouterOS.. That hasn't happened either.. Some changes are listed, many are not.

Lack of changelogs is one of the big reasons I'm trying to get my network off of RouterOS, not sure to what yet, but something else..

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 3:34 am
by tangent
I am still disappointed in the decision.

What I am trying to get across to you is that it isn't MikroTik's decision. They could build container.npk for TILE today, and it would still not get you containers on TILE.

Mikrotik pulled support for containers for the Tile architecture..

"Pulled?" It never existed.

What they did is the same thing I did above: looked for the available tooling, found none, and decided not to waste development resources providing it.

The higher end CCRs seemed like the ideal place for containers

If that's the case, then why didn't Docker do it for all of the other TILE-based host types? Or Podman? Or Rancher?

For that matter, why does the keystone project upstream from them all (QEMU) not support TILE?

Answer: it's a dead architecture with no good reason to keep pouring development effort into it, none of which is MikroTik's fault.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 3:55 am
by kevinds
They could build container.npk for TILE today, and it would still not get you containers on TILE.
Maybe, but without it, it CAN'T happen.
"Pulled?" It never existed.
It was one of the promised v7 features though, up until v7 was released, then it became just ARM/x86.
If that's the case, then why didn't Docker do it for all of the other TILE-based host types? Or Podman? Or Rancher?
I don't work there, I wasn't in those meetings. They have added a number of architectures since they started.

I accept that containers will never exist on Mikrotik's Tile platform, but I am still disappointed by it..

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 4:01 am
by tangent
They could build container.npk for TILE today, and it would still not get you containers on TILE.
Maybe, but without it, it CAN'T happen.

The same argument applies to QEMU TILE support and the requisite base container image needed to bootstrap the first practical image. Why is MT to blame for not providing the last-0.1% bit when none of the rest exists?

"Pulled?" It never existed.
It was one of the promised v7 features though

[citation needed]

They have added a number of architectures since they started.

Yes…after QEMU support for the CPU in question and the base images were provided.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 4:17 am
by kevinds
Why is MT to blame for not providing the last-0.1% bit when none of the rest exists?
Without it there is no reason to put in the effort. It is a waste of time knowing the end goal simply isn't possible regardless of the work put in.

But yes, they stopped making the CPUs last year, it is now a dead architecture.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 4:43 am
by tangent
Without it there is no reason to put in the effort.

The reasons are independent of the existence of a TILE build of container.npk.

I predict that if you take your gadfly routine to the QEMU and Alpine project fora and try to get them to include TILE support, you'll get zero traction, regardless of whether you promise them a container.npk port if they do their part.

MikroTik could call your bluff today, right now, and it would still get you nowhere.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 5:56 am
by kevinds
MikroTik could call your bluff today, right now, and it would still get you nowhere.
Maybe.. Being that it would be my end-goal, I wouldn't start unless I knew it would happen. Because I predict it will never happen, there is no reason to start the project.

As I said in the beginning.. Chicken-Egg...

Do this, this, this, and then this; after all that, *maybe* Mikrotik will provide the package.. I estimate a thousand hours to get started and another one to two thousand on bugs, for a maybe, so no, not happening. That is a lot of work when, realistically, I see no positive outcome.

The sad part is, I suspect Mikrotik has done most of the needed work already: they have the toolchains and are using them to compile their software.

I can list a few reasons why it would be a futile project, and I don't see many good reasons to begin.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 7:04 am
by tangent
As I said in the beginning.. Chicken-Egg...

I don't buy that analogy. If that were the case, no one could ever bootstrap a new CPU architecture.

Bootstrapping proceeds in small steps. It's a lot of work — enough that there have been entire companies founded on doing that type of work — but there is never a point where you're in a true chicken-and-egg situation.

MT could doubtless produce an untested build of container.npk for TILE within an hour of deciding to do so; there is no "egg" prerequisite here. While it is true that without the build toolchain, you wouldn't know if that package was of any practical use, there's a reasonable chance that it works without change given that the other platforms are well-tested by this point. If you were to couch your request in terms of an "if it breaks you get to keep both pieces" warranty, for alpha testing only, MT support might well be willing to provide it.

But if they did, what will you have accomplished in winning that gift?

I suspect Mikrotik has done most of the needed work already: they have the toolchains and are using them to compile their software.

The cross-compilers are a small fraction of the needed work.

First let's dispense with the possibility that the requisite C cross-compiler toolchains are proprietary builds provided by Tilera or one of its successors, thus unable to be shared. If that's the case, this project is dead in the water from "go."

If it is instead possible to use something based on Clang for Tilera — last changed 11 years ago, you will note! — then any sufficiently competent software developer could build those tools today, right now, without any help from MT's development or support departments. The how-to is documented.

The next step is to revive the dead tilegx target in QEMU. I have since found out that it was indeed committed, but was removed in 2021. Rolling back to that version's parent and using the given configure command, I get a bogus error about an unknown target name. Attempting to roll back even further fails due to a dependency on Python 2, which I no longer have here. I could spin up a VM based on an obsolete version of Linux to chase this further, but I can't be bothered. It's your itch; you pursue it.

Should you then solve all the tilegx target QEMU build problems and get it working, you're still not done, because you have reached the point where "most of the needed work" does in fact occur: you must port a sufficiently featureful Linux distribution to this emulated TILE platform, without which you cannot build any OCI images. (Again, no images, no running containers.) I suggested Alpine above, being small and popular for the purpose, but it doesn't much matter which one you pick because according to Distrowatch, at least, there are zero existing Linux distros for the tilegx architecture. Your only choice is which one you want to port from scratch, not which one to reuse as your pre-made base.

That brings us to the binfmt piece, which I believe you can temporarily skip by using Buildah instead of BuildKit as it allows you to manually force all commands through your local QEMU tilegx emulator. That lets you run the "native" tilegx builds of Clang under QEMU to build your first tilegx binaries.

It is possible to build and test all this without any help from MT on your desktop/laptop computer platform of choice.

And then, running software in hand, you would finally have a defensible reason to expect MT to test your first TILE container against their internal container.npk builds and work out any remaining bugs.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 8:35 am
by kevinds
It is possible to build and test all this without any help from MT on your desktop/laptop computer platform of choice.
I know.. And going back a few years in the Linux kernel, to before the decision was made to remove it, the kernel supported Tile — 3.something. It was pulled because nobody answered that they were still using it.. I have no idea if it would be put back in if people actually started to.. I doubt it, but it is possible, I suppose..

I looked into this before v7 was released, when containers were 'still on the table'. Wanted to be ready for when the feature was available.

The only reason I would go down that path again, is to run containers on my CCR1xxx routers.. I don't care if Company X or Y doesn't want to include the tools. The only platform I use containers on is RouterOS, trying to run containers on 'normal' systems annoys me, a lot.

Help from Mikrotik was never needed, just having the option was.
And then, running software in hand, you would finally have a defensible reason to expect MT
You were sounding good up until that point.. I expect nothing from MT, I don't even expect them to fix their bugs because they don't. It is always a pleasant surprise when they do, but no, I don't expect it. One would be a fool to.

Half-baked is how I describe RouterOS.. There is a good start to many features, but so many just fall short of actually being done well, you get used to the bugs and just try and work around them. v7 helped a few compared to v6, but it is largely still the same.

Part of the reason I am looking to get off RouterOS.. The price to performance is the best of any router I've dealt with, along with updates, but there comes a point when I ask "why doesn't this work properly?" and "why can't I do this?" (not talking about containers), and I've started to look for another vendor..

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 8:59 am
by normis
What will you run in the containers on the TILE routers? Is there software that would be compatible?

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 9:04 am
by tangent
If the tools described above existed, you could cross-compile anything. Your own PiHole example, for starters.

In case anyone is confused on this point, I'm in support of this idea in principle. My primary question is simply: who's going to do all the work needed to make it happen, given all of what has to be done to support it? Even if we assume the QEMU, binfmt, and Clang pieces can be resurrected without a huge amount of effort, I don't think it's fair to expect MT development resources to be spent on an Alpine Linux port to TILE, merely to serve as a build base.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 9:19 am
by normis
AFAIK such tools do not exist, so there is no need for containers on TILE. Even if there were a container package for TILE, there would not be anything that could run there, or that could make stuff run there.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 9:24 am
by tangent
AFAIK such tools do not exist

I outlined all the details for this above. The QEMU emulator for tilegx exists, but is bitrotted; it could be resurrected. A port of Clang to tilegx exists and probably works fine as-is, once built. With those two, one could then cross-compile the Linux distro needed to act as a base image. Given that toolchain, you could then rebuild any portable software atop it.

Therefore, here is my challenge to you: if someone pulled all that together and produced an OCI image that ran under QEMU, would you (MikroTik) be willing to provide a build of container.npk to let that same image run on TILE-based routers?

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 9:26 am
by normis
TILE CPUs are no longer manufactured and kernel support has been discontinued. Is it worth investing time in it now?

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 9:41 am
by tangent
TILE CPUs are no longer manufactured

True, but your customers have bought them, and some number of them still work. I believe the motivation behind this thread is that some subset of these customers want to stretch their useful lifetimes by using them as container runners.

(For what it's worth, I don't own any TILE-based MT products. I'm speaking here from experience building containers and other software, not out of any personal need for this feature to exist.)

kernel support has been discontinued

MikroTik ported the patches necessary forward into RouterOS 7, solving the deployment case.

A TILE-supporting CPU isn't needed at build or emulation time, because containers use the host OS's kernel. Syscalls are resolved by call number, not by memory address or anything else distro-specific. For a given architecture, the numbers are stable across every Linux distro — mkdir(2) is syscall #83 on any x86_64 Linux, for example — which is what allows in-container calls into the kernel to be mapped onto the host's kernel.
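
A small illustration of that stability: the numbers below come from the kernel's per-architecture syscall tables. They are fixed for a given architecture across all distros, but differ between architectures — which is why a container's binaries must match the host CPU unless an emulator translates for them:

```python
# Syscall numbers from the kernel's per-architecture tables (x86_64 and
# i386 tables, plus the generic table used by aarch64).
MKDIR_NR = {
    "x86_64":  83,  # mkdir
    "i386":    39,  # mkdir
    "aarch64": 34,  # mkdirat -- aarch64 never had a plain mkdir syscall
}

# Same distro-independent number within an arch, different numbers
# across arches.
print(MKDIR_NR["x86_64"] == MKDIR_NR["aarch64"])  # → False
```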

Is it worth investing time in it now?

My premise is that MT's slice of the work is merely to produce a build of container.npk, then support it in the way they do all other NPKs. Everything else is left up to the FOSS community. Would you be willing to provide that build and chase bugs?

If not, then kevinds is right, and it doesn't matter how much work the FOSS community is willing to do to make this a reality.

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 3:22 pm
by tukan
My premise is that MT's slice of the work is merely to produce a build of container.npk, then support it in the way they do all other NPKs. Everything else is left up to the FOSS community. Would you be willing to provide that build and chase bugs?

If not, then kevinds is right, and it doesn't matter how much work the FOSS community is willing to do to make this a reality.
That is the core of what my first post was about. I deliberately did not call the containers "Docker". I had done some research before asking the question, too. I just wanted to know if MT is willing to publish *container.npk*; if not, then there is no point in trying to do anything, as was said in some posts before. Mikrotik has to have the infrastructure done, because otherwise they would not be able to create ROS7 for tile.

Qemu for tilegx
Tilera llvm
Cross compilers for both tile (32-bit) tilepro-linux and tile (64-bit) tilegx-linux

Re: CCR1xxx with containers

Posted: Fri Feb 23, 2024 4:50 pm
by tangent
I just wanted to know if MT is willing to publish the *container.npk*, if not then there is no point in trying to do anything as was said in some posts before.

I don't see a reason to be that black-and-white about it. All we need at this early stage is a pledge from MT that once someone produces a working TILE-based container that runs under emulation, MT will then produce the container.npk package needed to run it natively on the hardware. Having it before we have the toolchain has zero value. It's only necessary to make the last step, from emulation to native execution.

I do understand the argument that someone wouldn't want to start work on producing that toolchain without knowing whether MT will ever provide this, but understand my point in return: all we need at this stage is the pledge, not the actuality.

That said, compiling container.npk for TILE shouldn't be more than an hour's work. The only problem with doing that early is the inverse of the prior problem: without a working container to test whether the built NPK works, MT's software developers and QA staff can do little more than throw it over the wall and pray. This is why I suggested above that it be provided under a non-warranty; "if it breaks, you get to keep both pieces" kind of thing.

Mikrotik has to have the infrastructure done because otherwise they would not be able to create ROS7 for tile.

That only helps them build container.npk for TILE. It doesn't help anyone else build OCI containers that run atop it.

MikroTik don't need a QEMU TILE emulator at all. Maybe they have it anyway, such as to speed the compile-test-debug cycle time, but it isn't a hard requirement for them, as it is with OCI container build systems. If they don't have it ready to hand and freely distributable, someone's going to have to resurrect it, which in turn is likely to require a contemporaneous (e.g. out of support for the last decade) version of Linux. Good luck finding a balance between that and the need for a version of Linux that will run a modern OCI build system. RHEL 7 still ships with Podman 1.6.4, for instance, whereas the current version is 4.9.3.

Following from that lack of a hard requirement for QEMU, MikroTik's developers also don't need TILE binfmt recognition in their development systems' kernel to pass "foreign" binaries transparently down to QEMU. This is the easiest piece to provide, but it'll likely require that you run a custom kernel on your build systems.

I have no internal information, but I fully expect that MT's developers are doing traditional cross-compilation instead, where their C compiler runs natively on x86_64 but generates TILE instructions. Nowhere is QEMU involved in this. What Docker and Podman do is entirely different: they run a native TILE compiler under QEMU to produce TILE binaries. As far as the build environment knows, there isn't any cross-compilation or emulation going on at all; code running inside it thinks it's all running native on TILE.

That in turn means that if MT were to drop their C compilation toolchain for TILE onto a file server in a tarball right this instant, the only use it would have in the project proposed in this thread is to help bootstrap the next step. Eventually that will have to be thrown away because BuildKit and Buildah require a TILE-native C compiler toolchain that produces TILE binaries, not an x86_64 to TILE cross-compiler.

That next step is a doozy compared to all the rest, a full-time project for months: a Linux distro cross-compiled for TILE, for use as a container base image.

Oh, and by "distro" I don't mean "RouterOS". The bare minimum is something like Alpine with enough of the package repo as needed to build the desired containers. Dockerfile RUN commands require a target-native Busybox runtime environment at least, and nearly all will require a package manager of some kind. When that package manager is invoked (e.g. "RUN apk add clang") you not only need it to deliver up a TILE build of Clang, but also all its dependencies: binutils, musl-dev, etc.

You don't need to port the entire Alpine package repo, but you do need to port enough of it to support your desired containers.
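
A toy model of why the porting effort snowballs: installing one package drags in its whole dependency closure, and every member of that closure needs a TILE build. The package names and dependency edges below are illustrative, not real Alpine metadata:

```python
# Hypothetical slice of a package repo: package -> direct dependencies.
TOY_REPO = {
    "clang":    ["binutils", "musl-dev"],
    "binutils": ["zlib"],
    "musl-dev": ["musl"],
    "zlib":     [],
    "musl":     [],
}

def closure(pkg: str, repo: dict) -> set:
    """Every package that must be ported before `pkg` can be installed."""
    needed, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p not in needed:
            needed.add(p)
            stack.extend(repo[p])
    return needed

# "RUN apk add clang" implies porting all five of these, not just clang.
print(sorted(closure("clang", TOY_REPO)))
# → ['binutils', 'clang', 'musl', 'musl-dev', 'zlib']
```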

And then, once you've done that, you'll want to build a second container, and chances are decent it isn't going to be "FROM alpine…" but "FROM ubuntu…" or something else instead. Now you have a hard choice: either port these other containers from Ubuntu to Alpine, or undertake a repetition of the prior effort, porting Ubuntu to TILE, too.

This is not easy, by any stretch.

Re: CCR1xxx with containers

Posted: Sat Feb 24, 2024 2:34 am
by kevinds
I don't buy that analogy. If that were the case, no one could ever bootstrap a new CPU architecture.
I predict that if you take your gadfly routine to the QEMU and Alpine project fora and try to get them to include TILE support, you'll get zero traction,
It takes a LONG time for any group to accept/support a CPU. Does Alpine run on Apple's M1 CPU yet? Or still only with emulation?

Given the incredibly small usage this would have, I too predict zero traction.
It is possible to build and test all this without any help from MT on your desktop/laptop computer platform of choice.

And then, running software in hand, you would finally have a defensible reason to expect MT to test your first TILE container against their internal container.npk builds and work out any remaining bugs.
Mikrotik won't provide container.npk because there are no containers that will run on the tile architecture.

Mikrotik, even if the containers did exist, can just say "No we are not going to do/provide that" and the discussion reasonably ends.

I (don't know about others) won't put the work into creating a container that will run on the Tile architecture because it won't run on RouterOS.

Re: CCR1xxx with containers

Posted: Sun Feb 25, 2024 7:27 am
by kevinds
a TILE CPU cross-compilation toolchain
I have most of what is needed to build the toolchain. Most of the big pieces were provided by the company producing the Tile CPUs; as the company has changed hands multiple times, though, some pieces went missing, so it took a bit to track some of them down.

https://github.com/online-stuff/tilera-toolchain/

Re: CCR1xxx with containers

Posted: Tue Feb 27, 2024 10:32 am
by tukan
All we need at this stage is the pledge, not the actuality.
Yes, we can agree on that. So far I have not seen anything from Mikrotik, not even the pledge. I tried to search for it but found nothing.

As for the rest, I never wrote that it would be easy-peasy, but it is doable.