Bridge network does not allow containers to talk to each other?

I have lidarr, sonarr, transmission, etc.

To connect those to each other it would be best if I can use the internal host, like “transmission:port”. The internal IP addresses seem to change when I restart the server or sometimes when I update a docker container. My network is set to bridge for all apps. How do I use the IP-independent addresses to refer to containers?

Note: I am also using cloudflared. Could it have something to do with that?

Welcome to the Forum. Yes, containers can talk to each other if they are on the same bridge network. For example, say you have MariaDB installed on the mariadb bridge. If you install another container that needs to talk to the DB, put it on the same bridge network, and then they can. You can also connect to each other by hostname: in the app's settings, go to the terminal, type hostname, and then use that hostname.

It would be nice if we could set a static IP on that bridge to keep them consistent. Actually, maybe someone knows how to do that.

Hi there, thank you for your response.

I have set all the apps to run on the same bridge. However, they are unable to find each other. Another option might be ‘host’, but that seems somewhat insecure.

I have cloudflared running through the ZimaOS app; could that have something to do with it?

Here are some screenshots of the settings of some of these apps. As you can see they all run on bridge.

So, yes, the IPs can end up changing, and apparently so can hostnames. What won’t change are the DNS names and aliases associated with them. I’m going to provide a couple of commands that can pull the DNS names, the aliases, or all of the networking information for a specific Docker container, or for all of them. You can see where I specify my network name of mariadb, which my mysqlworkbench and spoolman containers use. These are run over SSH and require sudo.

sudo docker inspect --format='{{json .NetworkSettings.Networks.mariadb.DNSNames}}' $(sudo docker ps -aq) | jq > /DATA/Documents/mariadbnetDNS.txt
sudo docker inspect --format='{{json .NetworkSettings.Networks.mariadb.Aliases}}' $(sudo docker ps -aq) | jq > /DATA/Documents/mariadbnetAliases.txt
sudo docker inspect --format='{{json .NetworkSettings.Networks}}' $(sudo docker ps -aq) | jq . > /DATA/Documents/dockercontainersnetworkinfo2.txt

You don’t have to send the output to text files; I just find it convenient to review in one.

I have found that neither the DNS names nor the aliases change on settings changes, updates, or reboots, whereas hostnames and IP addresses have changed at times.

I’d actually love to find a way to make the IPs static, but the alias or DNS name works great.
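On the static-IP question: Docker does support pinning a container’s IP, but only on a user-defined network that was created with an explicit subnet; it does not work on the default bridge. A minimal sketch, with example names and addresses of my own choosing:

```shell
# Create a user-defined bridge with a fixed subnet (example values)
sudo docker network create --subnet 172.30.0.0/16 media_net

# Attach a container with a pinned address inside that subnet
sudo docker run -d --name box-static --network media_net \
  --ip 172.30.0.10 alpine sleep 3600

# The pinned address is reported back by inspect
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' box-static
```

The pinned IP survives container restarts; if the container is recreated, the --ip flag has to be passed again.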

!!! Very long post !!!

I think we are talking about different things.

I need to be able to go into Radarr’s settings and, for Transmission as a download client, set the host to ‘transmission’, so e.g. https://transmission:9091. The browser client will then check the connection from within the container:

That works in a normal Docker/Docker Compose environment like Dokploy or Coolify when the bridge is the same (and user-defined, which might be the problem; read on for that), but it somehow doesn’t work in ZimaOS. The bridge does not seem to accept cross-container name lookups.

When I run ‘docker inspect bridge’ in my Zima environment I get the following:

[
    {
        "Name": "bridge",
        "Id": "1071a3d93dfe4b31df00e67aa3b0c301b9483394a8122146b5baaf34d2907066",
        "Created": "2025-11-19T21:23:30.915193846+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "08e2c23baaad1cc444a43545daad0f688724869052b2f5bccff9b8cc46b188e6": {
                "Name": "sonarr",
                "EndpointID": "f1b3990f4d05f6e75c342113b1d363221925b11b03be72a66a60bee9ac52dc61",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.16/16",
                "IPv6Address": ""
            },
            "0c5b7c1ee163bc69f726024ea28bed90398dc8ba6abf6bdf3f1e265af48b7e12": {
                "Name": "lidarr",
                "EndpointID": "6948b6830c89ca158da7f120f72bc383a5f2c56dbd035b2ef297e07d1d5a8ebf",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.9/16",
                "IPv6Address": ""
            },
            "2474670ce3014514f48b1bc5c30c4709a86dc86751d23dc271fa537e7928ad44": {
                "Name": "emby",
                "EndpointID": "6507d795587349171cc825821107a8b5ea59a622502cb87d81519f3aaef7b081",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.20/16",
                "IPv6Address": ""
            },
            "2d0803f8625b658b9b1bfa4f79876fb152c53f61308a1faf474b1d607432dd48": {
                "Name": "photoprism",
                "EndpointID": "63d58d401753aafee2dd7885d2d03cdc753e0949ed69aed2df905bd9c38c2173",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "46bed58b4fb8ad4b8318f38caef3946360269120e253e67c0961f40c7aadeed7": {
                "Name": "nzbget",
                "EndpointID": "ec5839b92987ae9c78b829a4ababbd666acd04b1307b29e07c7063e6ad68dd89",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.18/16",
                "IPv6Address": ""
            },
            "4ecc015d52b4322a88cee31fe4d586193b60dbe580dfd4f7b2a407e3d6dba8ef": {
                "Name": "fileflows",
                "EndpointID": "e487cb451e23e6027c57c7e91a6323eb51fa915f6bf7f580cd2b66fced0d3890",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.15/16",
                "IPv6Address": ""
            },
            "53602a59d19766e638735bc965aa7da90724dd03b7074a9dce93db6d64056083": {
                "Name": "obsidian",
                "EndpointID": "20604f64b40dad68c9cd1235018980bd99758afcc5c2660716530bc70029fdcc",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.19/16",
                "IPv6Address": ""
            },
            "577296d9315767e61172acfeaa29171d53e91bfaf0db20dd002e953a30fd4dba": {
                "Name": "jellyseerr-jellyseerr-1",
                "EndpointID": "6758972b969ac7fa573876060f97c5683e041167f3a4252e7212143a5729fde2",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "6128f4561a2063a4965d7a3c5d211fa09128ff8e6f69f4c28e4a99934da3b4c8": {
                "Name": "radarr",
                "EndpointID": "a251e5bb9f294a0ab0c68024f2ed592dc8d5f467d414bd565df6c5c57330cfab",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.7/16",
                "IPv6Address": ""
            },
            "7aeae6b04499e0b0c9b25ea9d5f4621dc1134f737edb41182ea3f865ea63d05f": {
                "Name": "prowlarr",
                "EndpointID": "1e2a9c91d0f9a625cc4f0f674e862f9798676dcf31d815f3a8dc2855a4513d29",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.11/16",
                "IPv6Address": ""
            },
            "83275cd161263bf38dd8d9d8f3e74329bfe63aeb3638657ecd854ce5e64e40e2": {
                "Name": "transmission",
                "EndpointID": "8d56d399c0b893d9e977ff5d171921ccbb1ae6977285378bfbf2d4cd4ad00d08",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.14/16",
                "IPv6Address": ""
            },
            "874d64efbe28106b8b1a97f0a3069842d95224e5cd7a849217ee4d3517ab1d3f": {
                "Name": "qbittorrent",
                "EndpointID": "12d84f25a96cf35625ee6cd592f6cef6de219fc0a3db4cfc343ea6acee63e231",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.10/16",
                "IPv6Address": ""
            },
            "ab08d2ff4d8003f9d39b1e6d3cedc843b817493252d7da188cb562d423849855": {
                "Name": "mylar3",
                "EndpointID": "06a6105a5b516a659d41d2f1b2aa4b0f06ceb3d49f2d2a50cfc83bf514a63572",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.17/16",
                "IPv6Address": ""
            },
            "c713dd75e566867340537f5c853b73f719fe026f4addfde968a15984b93e3b73": {
                "Name": "sabnzbd",
                "EndpointID": "d718fc098ffb74855dd7203fa7c855131e5ea8d541721cc4fbf47600e783461f",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "dd288809b369b56e7b389a98137225b573f6fbb11dd62369b96a2e3c40504c85": {
                "Name": "jellyfin",
                "EndpointID": "d1a65f53237f95535f8651d066f3ea338d0fd3f003ad9a2f3efd688e577f29d3",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.13/16",
                "IPv6Address": ""
            },
            "e1c2bcc9745bcd1762be505d79f9bdb5be830710188f6737863f90930500b4d9": {
                "Name": "readarr",
                "EndpointID": "8d4f2c1d772131541ac2c0ac695c0b28eaed9991fb373f555d2c4acc19cb28df",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.12/16",
                "IPv6Address": ""
            },
            "f5ab229bdcfd3f9ee1797215d29eaf0a0aac04784295cbc18b6deaa1040e52d2": {
                "Name": "bazarr",
                "EndpointID": "0eb03770ff9b5be99635afdca64e45fdb18e9cf1ac16b27da37239749a569950",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "f65853031741b28321b5d4acfbf038f3bdaf7d6ebeb49136a072f50cc5846dff": {
                "Name": "deluge",
                "EndpointID": "8ce80dc8f597c5a934e1db3036ab7f230330d1bb92af29503c13d09af015595a",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.8/16",
                "IPv6Address": ""
            },
            "fd90ff7b18093b22c8fca7ceb7fc5c0d06d409afc3181651b8b6d74008490b1d": {
                "Name": "autobrr",
                "EndpointID": "14115016aafd2f1106c166600de162fd2bad1823e8293a4f51e0e4ab87d8e662",
                "MacAddress": "▊▊▊▊▊▊",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

So all the containers are in the same network, with their names as expected (normally also the internal hostname).

But if I then go to the radarr service (at port 7878 TCP), open the terminal, and ping transmission (at port 9091 TCP / 51413 TCP) I get no results:

root@6128f4561a20:/# ping transmission
ping: bad address 'transmission'
root@6128f4561a20:/# ping transmission:9091
ping: bad address 'transmission:9091'
root@6128f4561a20:/# ping transmission:51413
ping: bad address 'transmission:51413'

Or, externally, using docker exec:

sudo docker exec -it radarr sh
root@6128f4561a20:/# ping transmission
ping: bad address 'transmission'

But pinging the IP address does work:

root@6128f4561a20:/# ping 172.17.0.14
PING 172.17.0.14 (172.17.0.14): 56 data bytes
64 bytes from 172.17.0.14: seq=0 ttl=64 time=0.060 ms

When inspecting my radarr service these are the NetworkSettings.Networks:

"Networks": {
  "bridge": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": null,
    "MacAddress": "▊▊▊▊▊▊",
    "DriverOpts": null,
    "NetworkID": "1071a3d93dfe4b31df00e67aa3b0c301b9483394a8122146b5baaf34d2907066",
    "EndpointID": "a251e5bb9f294a0ab0c68024f2ed592dc8d5f467d414bd565df6c5c57330cfab",
    "Gateway": "172.17.0.1",
    "IPAddress": "172.17.0.7",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "DNSNames": null
  }
}

and for Transmission:

"Networks": {
  "bridge": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": null,
    "MacAddress": "▊▊▊▊▊▊",
    "DriverOpts": null,
    "NetworkID": "1071a3d93dfe4b31df00e67aa3b0c301b9483394a8122146b5baaf34d2907066",
    "EndpointID": "8d56d399c0b893d9e977ff5d171921ccbb1ae6977285378bfbf2d4cd4ad00d08",
    "Gateway": "172.17.0.1",
    "IPAddress": "172.17.0.14",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "DNSNames": null
  }
}

So, in summary:

  1. All containers are in the same network
  2. The subnet is the same
  3. Inter-Container Communication (ICC) is enabled
  4. The containers are able to communicate through IP
  5. BUT they cannot communicate using their hostnames

I have also tried to set up a custom user-defined network using Portainer, but whenever I try to switch to that network using ZimaOS’ settings for a specific service, I get an error saying: network internal-network was found but has incorrect label com.docker.compose.network set to “”

So it seems I’m at a dead end, where using an IP to connect between containers is impractical due to their changing nature, and I can’t connect through aliases/MAC addresses either.

I have the same problem. I tried every possible way to set the hostname, but it is not working. My last attempt: I downloaded the YAML, edited the hostname, and set up a new Docker container with that file. The hostname is set correctly after this, but when I open the terminal and type “hostname” it is “r4234fdd33” (example). That is a bit frustrating, because I can’t use Nginx Proxy Manager / HAOS / MQTT / Z2M with that.

What’s actually happening

I believe this isn’t a true “containers can’t talk to each other” problem. It is a Docker name-resolution issue.

On the default Docker bridge network, containers can usually communicate by IP address, but they cannot automatically resolve each other by container or service name. That is why:

  • Pinging by IP works
  • Pinging by container name fails
  • Everything looks broken, even though basic networking is working

I think this confuses a lot of users because it feels like a firewall or permissions problem, but it is actually just how the default bridge network behaves.

Why this happens on ZimaOS

I believe ZimaOS installs apps on the default Docker bridge network unless you change it manually. That network does not provide built-in DNS between containers.

This means containers can exist on the same subnet and reach each other by IP, but they cannot resolve names like “radarr” or “transmission” unless a different network type is used.

I do not believe this is a bug in the containers themselves. I believe it is simply a limitation of the default network choice.

What I suggest instead

I suggest using a user-defined custom bridge network instead of the default one.

On a custom network, Docker provides internal DNS between containers. This allows containers to find and communicate with each other using their names instead of IP addresses. This is the correct and clean Docker-native solution for this type of setup.
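To make the difference concrete (image and names here are just examples), Docker’s embedded DNS only answers on user-defined networks:

```shell
# Default bridge: IP connectivity works, but name lookup fails
sudo docker run -d --name web-default alpine sleep 3600
sudo docker run --rm alpine ping -c 1 web-default
# -> ping: bad address 'web-default'

# User-defined bridge: Docker's embedded DNS resolves container names
sudo docker network create demo_net
sudo docker run -d --name web-custom --network demo_net alpine sleep 3600
sudo docker run --rm --network demo_net alpine ping -c 1 web-custom
# -> resolves and replies
```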

How to do this in ZimaOS without using terminal commands

Go to the app in ZimaOS and open the Edit or Advanced settings.

Locate the Network option. Instead of leaving it on “bridge”, select Create new network or Custom network.

Give the network a simple name such as zima_internal or media_network.

Apply the same custom network name to every app that needs to talk to each other. This part is critical. All related apps must be on the exact same network.

Save the changes and restart the apps.

Once this is done, you should be able to use container names instead of IP addresses. For example, using “transmission” instead of an IP address in Radarr settings or vice versa.

My final conclusion

I do not think this is a firewall issue.
I do not believe your containers are broken.
I believe the problem is caused by using the default bridge network which does not support automatic name resolution.

As soon as both apps are placed on the same custom network, I believe the problem will disappear.



You can’t create a new network in the Zima UI! I created a network with:

docker network create example_network

After that i can see and change to that network, but when you

docker exec -it container_name /bin/bash

and call “hostname”, it says “ZimaBoard2”. It is still the host’s network! This is clearly a bug in ZimaOS. Also, when you

docker container inspect example_container

you see all the mistakes made by ZimaOS. It is really unusable at the moment.

add:

I created a docker compose file with:

bla…
    networks:
      - example_net

networks:
  example_net:

This also doesn’t work. You end up with a network named e.g. “holy_fu*ing_example_net”, and the next container does the same, so you also get two different container networks.

If you create a network on the CLI and reference it in the docker compose file, it literally says that it can’t set the network and uses “”!
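For reference, in plain Docker Compose the project-name prefix is avoided by marking the network as external, so Compose attaches to the pre-created network instead of creating “&lt;project&gt;_example_net”. A sketch, assuming example_net was already created with docker network create:

```yaml
services:
  example:
    image: alpine
    command: sleep 3600
    networks:
      - example_net

networks:
  example_net:
    external: true   # attach to the existing network instead of creating a prefixed one
```

Whether ZimaOS honours the external flag is a separate question, but this is the standard Compose behaviour.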

Hey, I just did the test. You need to switch to root to create a Docker bridge network.

docker network create yourNet1

After that, you should be able to choose the network you create in the WebUI.

Hope this is helpful.

Or you can just wait.

My network from Portainer showed up after just waiting a bit, but Zima kept complaining about it not being a proper network, since it has no origin in a docker-compose file/container. Also, it still does not allow you to use domain names.

Best would be:

  1. If Zima automatically aliased things so container ‘a’ gets a / a.local as a domain on a custom network
  2. If there was an app store app (called “Bridge”, for example) that just creates a network and a monitoring interface (network stats, network clients, something like that) and allows other apps to connect to it so they can use hostnames.

For now, I’ve just set my router to give Zima a static IP and set all the apps to call each other through the network IP with the specific app port (since that never changes). This severely limits transfer speeds though, since it clogs up my home network instead of keeping communication within the machine.

As said, I have also tried to use a user-defined network.

  1. You cannot “Create new network or Custom network” from anywhere in the Zima UI. That feels like something an AI hallucinated for you.
  2. Setting a custom network (in my case created through Portainer) makes Zima complain about it being disconnected from a container/service. Setting it through the main interface doesn’t save in most cases. When it does work, it disconnects the application, resets ports in some cases, and still doesn’t let you connect to other apps (the only way I found is to add apps through Portainer, which defeats the purpose).
  3. Some apps have multiple containers that need to be on the same network to talk to each other. Changing them to the user-defined network somehow kills that functionality, even though everything is connected.

I believe that in a normal Docker environment, user-defined networks work exactly as expected. Containers get proper hostnames and internal DNS, and name-based communication works.

I think the problem here is that ZimaOS sits in the middle and interferes with, overrides, or simply fails to correctly apply that configuration. Because of that, custom networks behave like they aren’t really being respected.

Right now, I believe the only stable workaround is using the host IP with fixed ports, even though it’s not ideal.

I believe you’re technically correct that a Docker bridge network needs to be created as root, and I’ve already tested that part.

The issue is that even after creating the network, and even when it appears in the ZimaOS WebUI, ZimaOS still treats it as an external or disconnected network. In practice it doesn’t get applied properly to the containers. Hostnames and internal DNS still don’t behave like they do in a standard Docker environment, and in some cases the network change breaks multi-container apps or resets ports.

So while the network technically exists and is selectable, I don’t think ZimaOS is actually respecting or integrating user-defined networks correctly at the moment. That’s why containers still behave as if they’re on the default or host network.

I believe the underlying problem isn’t network creation, but how ZimaOS applies and manages those networks after they exist.

The problem is the underlying CasaOS framework between ZimaOS and Docker. You can see the incorrect variables it adds when you inspect the Docker container.

@gelbuilding Hey, after talking with our engineer, it is certain that the Docker network behaves normally. The misunderstanding may result from desynchronisation between the backend and the GUI.

The key point is that:

  1. You can create a named bridge network and use it for multiple containers.
  2. And you have to reboot the OS after you create the network so that the GUI picks up the created bridge network and you can use it in the Apps’ settings panel.

You can do the test easily:

# Switch to root
sudo -i

# 1.1 Create two networks
docker network create net-a
docker network create net-b

# 1.2 Run three containers (use the Alpine image since it has the ping tool)
docker run -d --name box1 --network net-a alpine sleep 3600
docker run -d --name box2 --network net-a alpine sleep 3600
docker run -d --name box3 --network net-b alpine sleep 3600

# 2.1 check box2 IP (the same bridge)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' box2
# assume 172.18.0.3

# 2.2 check box3 IP (not the same bridge)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' box3
# assume 172.19.0.2

# 3.1 ping box2 IP (in the same bridge)
docker exec box1 ping -c 2 <box2_IP>
# or ping by container name (recommended)
docker exec box1 ping -c 2 box2

# 3.2 ping box3 IP (not the same bridge)
docker exec box1 ping -c 2 <box3_IP>

Remember to reboot after the creation of the network so that the GUI can get the created network(s).

# reboot 
reboot

You can also check this in the dashboard UI.

Hope this is helpful.


Thanks Giorgio, that makes sense now. The missing piece on my side was the required reboot after creating the network. Without it, the ZimaOS GUI was out of sync with Docker, so it looked like the network was being ignored. After rebooting, name resolution and container-to-container communication on the custom bridge network are now working correctly.

Just to clarify, in a standard Docker environment on Linux a reboot is not normally required, the network works immediately. In this case, the reboot was only needed to resynchronise the ZimaOS UI with the Docker backend.

I also checked the official ZimaOS > Dev > Networking documentation, and this reboot requirement is not currently mentioned there, which is likely why the behaviour was so confusing.


Understood. The reboot is simply to allow the Web UI to retrieve the latest Docker information. Restarting might not be the fastest/best approach; there may be better ways.

Fortunately, advanced users can customize Docker network settings. Cheers.


@Zima-Giorgio Thanks, appreciate the clarification.
I agree a reboot isn’t really the best/cleanest solution long term, and I totally understand advanced users can customise Docker directly.

Just to add one last important detail I noticed during testing:

When selecting a manually-created network like net-a, ZimaOS shows the error:

“network net-a was found but has incorrect label com.docker.compose.network set to ‘’”

This suggests ZimaOS is validating networks based on docker-compose metadata, even when the network is created via standard Docker CLI.

In normal Docker behaviour, docker network create does not add compose labels, so the network is still perfectly valid, but ZimaOS treats it as “incorrect” or inconsistent, which causes:

• Apps to not fully start
• Networks to behave as “invalid” in the UI
• Confusion compared to standard Docker installations

So in short:

  • Reboot: fixes UI desync
  • But label validation: still blocks full compatibility

It might be worth either:
• Allowing standard Docker networks without compose labels
• Or documenting that ZimaOS expects compose-created networks
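One possible workaround, untested against ZimaOS’s validator and therefore only a sketch: docker network create accepts arbitrary labels, so the compose-style labels the error message asks for can be added by hand. The label names mirror what Compose itself normally sets; the project and network names are examples:

```shell
# Hypothetical workaround: create the network with the labels
# ZimaOS's validator appears to expect
sudo docker network create \
  --label com.docker.compose.network=net-a \
  --label com.docker.compose.project=zima \
  net-a
```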

Either way, just wanted to pass that extra detail on as it seems key to the root cause.


Screenshot after app creation + reboot (net-a / net-b now visible in ZimaOS UI):

This shows the custom networks appearing only after:

  1. Creating a ZimaOS-managed app
  2. Rebooting the OS

It also aligns with the earlier error indicating a com.docker.compose.network label inconsistency when selecting the network.

The problem is that pinging the IP was never the issue. The issue is that the hostname is not set correctly and you can’t ping the container by its name (hostname). IP ping always worked!

After I create the network manually via the CLI, and before the reboot:

  1. I can ping through the name. (See the screenshots.)

  2. And an App run/created through a docker run... command will show up on the dashboard as a ‘Legacy app’.

  3. The new created networks won’t show up in the Apps settings panel.

Then I reboot.

  1. The networks show up in the Apps settings panel.

  2. But I can’t choose one to replace the App’s current network. This was also tested by @gelbuilding.

I think this may be an issue. I will invite our engineer to look at this.

Then, I import the legacy Apps.

  1. You can do this by clicking their icons at the bottom of the dashboard.

  2. I find the container names are modified.

(I used boxa1, boxa2, and boxa3 at the beginning of this test.)

  3. I can ping again using the new containers’ names.

I reboot again after importing the legacy Apps, which were run via the CLI.

  1. In the settings panel, I still cannot select the created networks.

  2. I can ping boxa1-boxa1-1 from boxa2-boxa2-1.

Lastly

I think that the created networks not being selectable may be an issue. I will forward this possible issue to the team.

Thank you all for the feedback.
