Gotcha with redis and IPv6

Just a heads up for those who might run into the same issue:

I have an environment with a Redis container named redis. In another container in the same environment, I had a Node.js server trying to connect to the Redis server via the ioredis library. Basically I had a file like

import { Redis } from "ioredis";
export const redis = new Redis("redis://redis:6379");

On server start, I was seeing a stream of errors along the lines of

[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
    at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
    at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
    at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)

This was very strange, as the two containers were on the same network and redis should have been (and, it turns out, was) a valid hostname. Furthermore, I was able to SSH into the Node server instance and run:

# redis-cli -h redis PING
PONG

So the hostname was valid. What was going on?

After some discussion with the Cycle team (thanks!!), it turns out the internal networking within the environment is all done over IPv6, while by default the ioredis client resolves hostnames to IPv4 only (family: 4), so the lookup for redis came back empty. For whatever reason 🤷🏻‍♂️. But the fix was very simple. This works:

import { Redis } from "ioredis";
export const redis = new Redis({ host: "redis", family: 6 });

Explicitly tell the client to use IPv6, and it will.
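
If you want to confirm the diagnosis for yourself, here's a minimal sketch using Node's built-in dns module to compare IPv4 and IPv6 lookups (redis is the container hostname from above; the failure mode in the comments assumes an IPv6-only network like this one):

import { lookup } from "node:dns/promises";

// IPv4-only lookup, effectively what ioredis does by default.
// On an IPv6-only network this rejects with ENOTFOUND.
try {
  console.log(await lookup("redis", { family: 4 }));
} catch (err) {
  console.error("IPv4 lookup failed:", err.code); // ENOTFOUND
}

// IPv6 lookup: resolves, returning { address, family: 6 }.
console.log(await lookup("redis", { family: 6 }));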

Perhaps this will come in handy for someone later 😄 Again, thanks to the Cycle team for finding the solution to this!


Debian slim-bookworm - intermittent DNS failures

Just a friendly note to the community after talking it through with the support guys: slim-bookworm is having intermittent DNS resolver issues. We've narrowed one of our stack's problems down to the image itself and wanted to warn the community to save you a bit of aggravation. If you were debating moving everything to Alpine, let this be a final kick in that direction.
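
If you want to check whether your own containers are affected before migrating, one quick test is to hammer the resolver in a loop and count failures. Node's dns.lookup goes through the system resolver (getaddrinfo), so it exercises the image's libc. A minimal sketch, with an arbitrary hostname and iteration count:

import { lookup } from "node:dns/promises";

// Repeatedly resolve a hostname through the system resolver and
// count intermittent failures. Hostname and count are placeholders.
let failures = 0;
for (let i = 0; i < 1000; i++) {
  try {
    await lookup("example.com");
  } catch (err) {
    failures++;
    console.error(`lookup ${i} failed: ${err.code}`);
  }
}
console.log(`${failures}/1000 lookups failed`);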


Should I remove this from quarantine?

What's the worst that could happen?


DNS Service Throttle

My team recently encountered an issue with the Discovery Service where we received the following error in the console:

[Resolver Throttle] <ip here> has hit the max hit limit (250) and is being throttled.

After investigating, we discovered that our API was sending an excessive number of requests to a third-party service, which triggered the throttle in the Discovery Service. This throttling then impacted other API requests in our environment.

The Cycle team explained that the throttle is in place "to prevent getting banned from lookup services like Google's domain servers or other public nameservers." The throttle limit resets every five minutes.

We’ve since resolved the issue on our end, but I wanted to share this experience in case anyone else encounters a similar problem. Hopefully, this helps someone avoid the same situation.
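
If your workload legitimately needs to hit the same external hosts over and over, one mitigation is caching lookups in-process so each request doesn't count against the resolver's hit limit. A minimal sketch; the cachedLookup helper is hypothetical, and the 5-minute TTL simply mirrors the throttle's reset window rather than any real record TTL:

import { lookup } from "node:dns/promises";

// In-process DNS cache so repeated requests to the same host
// don't each hit the resolver. The TTL mirrors the throttle's
// 5-minute reset window (an assumption, not a real record TTL).
const TTL_MS = 5 * 60 * 1000;
const cache = new Map(); // hostname -> { address, family, expires }

export async function cachedLookup(hostname) {
  const hit = cache.get(hostname);
  if (hit && hit.expires > Date.now()) return hit;
  const { address, family } = await lookup(hostname);
  const entry = { address, family, expires: Date.now() + TTL_MS };
  cache.set(hostname, entry);
  return entry;
}

Node's net.connect (and therefore http.request) also accepts a custom lookup option if you want connections to go through a cache like this, though it expects the callback style rather than the promise version above.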


YAML is one of my least favorite things.

I've never liked YAML ... probably for the same reason I've never liked python. Indent-sensitive configs? Gross.

... but I know so many of you love YAML. :(
