exabrial a day ago

Laugh, but this probably does have some real world applications for Live Audio.

Digital live audio mixing is taking over, but it suffers from one flaw compared to analog: latency. Humans can adjust pretty easily to performing an action and hearing a delayed response (that's pretty natural in our daily lives; think of it as echolocation). It's sort of like standing farther from a guitar amplifier (sound travels roughly 1 ms per foot). Singers have it the worst, though: there is effectively zero latency from their own voice to their ear canal, so monitor systems try to stay analog as much as possible.

For digital audio links, every time you join them end-to-end or decode them, a bit of latency is added.

There are a few audio interconnects that run on Ethernet's OSI Layer 1 (the physical layer):

* AES50 is standardized; think of it as the 100Base-T of digital live audio. It's synchronously clocked with a predictable latency of roughly 62us per link. Pretty nice. Cat5e cables are dirt cheap and musicians are as destructive as feral cats, so it's a pretty good solution. Max length is 100 meters.

* Dante is also popular but actually relies on IP at Layer 3, so latency is variable. Typical values are 1ms - 10ms. Max length is pretty much unlimited, with a lot of asterisks.

FTA: 11us is _unbelievably good_ digital latency, and with near-unlimited length it's actually a pretty good value proposition for live audio. There may be a niche demand for a product like this: slap in some SFP adapters and transmit a channel of digital audio over whatever medium you like.
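
To put numbers on it, a toy latency-budget sketch in Python (the AD/DA figure is an illustrative assumption; the other constants are the figures above):

    # Toy end-to-end latency budget for digital live audio.
    # AES50 is ~62us per link (see above); sound in air is ~1ms per foot.
    AES50_LINK_US = 62
    AIR_US_PER_FOOT = 1000

    def budget_us(aes50_hops, converter_us, monitor_feet):
        """Total: AES50 hops + AD/DA conversion + air from wedge to ear."""
        return aes50_hops * AES50_LINK_US + converter_us + monitor_feet * AIR_US_PER_FOOT

    # Stage box -> console -> stage box, wedge 6 ft from the singer,
    # assuming ~0.5ms total for AD/DA conversion:
    print(budget_us(3, 500, 6))  # 6686 us -- dominated by the air path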

  • philjohn 7 hours ago

    Although when designing audio solutions for large venues, the further back a speaker stack is, the more you'll likely want to add a delay to it so that the sound hits at the same time as the sound from speakers closer to the stage - otherwise it can sound awful (like a strange echo): https://www.prosoundweb.com/why-wait-the-where-how-why-of-de...

    So yes, for monitoring, or linking two far away places with near zero latency audio, but not for connecting speaker stacks in a venue :)

  • lflux 21 hours ago

    Things have probably changed since I last talked to my friends at a large state radio/TV broadcaster, but for long haul they used either MADI over fibre, or AES50 into boxes from NetInsight along with SDI for the video feeds. This works so well that you can put the input/output converters in a venue hosting live music and do the program audio mix in a control room at broadcast HQ hundreds of kilometers away.

    • amluto 18 hours ago

      At 100s of km, you’d be pushing the limits for actual live sound, though. 100km is about a light-millisecond, and ordinary fiber is rather slower than light, so that’s maybe 3ms round trip per 100km. If a musician can hear themselves through monitors at too much more latency than that, it could start to get distracting.

      • lflux 6 hours ago

        As I understand it, the sound for the audience in the venue and the monitors for the artists were run locally by a separate mixer. The audio backhauled to HQ was for the live broadcast.

      • kijiki 18 hours ago

        If the monitors are 3ft away from the musician, they're already looking at 3ms of latency just in the air between the monitor and their ear.

        • _factor 16 hours ago

          This is why you see headphones used in recording studios, I'm sure.

          • InitialLastName 5 hours ago

            You see headphones used in recording studios because ambient sound (i.e. from a loudspeaker) has a habit of getting picked up by microphones.

      • mrb 5 hours ago

        Latency is 1ms for a round-trip through 100km of fiber (200km total).

  • miki123211 13 hours ago

    I've recently been reading about T1 and E1 lines, which were used to transmit most calls within and between telecom companies back in the day, and I was astonished that they transmitted data one sample at a time.

    Unlike IP, those were synchronous, circuit-switched systems. You'd first use a signaling protocol (mostly SS7) to establish a call, reserving a particular timeslot on a particular link for it, and you'd then have an opportunity to transmit 8 bits of data on that timeslot 8000 times a second. There was no need for packet headers, as the timeslot you were transmitting on was enough to identify which call the byte belonged to.

    Because all data from a single call always took the same path, and everything was very tightly synchronized, there was also no variability in latency.

    This basically eliminated any need for buffers, which are the main cause of latency in digital systems.
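
    A toy sketch of the timeslot idea (T1 slot count; the real framing bits are omitted):

        # Toy T1-style TDM mux: one byte per call per frame, 8000 frames/s.
        # A T1 carries 24 timeslots (an E1: 32); the slot index *is* the
        # call ID, so no per-byte headers are needed.
        SLOTS = 24

        def mux_frame(calls, t):
            """Build one 24-byte frame: slot i carries sample t of call i."""
            return bytes(calls[i][t] if i in calls else 0xFF for i in range(SLOTS))

        def demux(frame, slot):
            """The receiver recovers a call's sample purely by slot position."""
            return frame[slot]

        # Two calls, previously assigned slots 3 and 17 via signaling:
        calls = {3: b"\x10\x20\x30", 17: b"\xA0\xB0\xC0"}
        frames = [mux_frame(calls, t) for t in range(3)]  # sent 8000x/second
        assert [demux(f, 17) for f in frames] == [0xA0, 0xB0, 0xC0]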

    • toast0 4 hours ago

      > This basically eliminated any need for buffers, which are the main cause of latency in digital systems.

      You still need a buffer at each switching point, because the timeslots on each cable aren't likely to line up. But the buffer for each channel only needs to be 2 samples wide in the worst case where the timeslots overlap and you need to send from the buffer while receiving into the buffer.

      Given the timeframe when T1/E1 were developed, a more accurate perspective is not that buffers were eliminated, it's that they were never created.
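
      A minimal sketch of that two-deep buffer (names invented):

          # Ping-pong buffer at a TDM switch point: inbound and outbound
          # timeslots don't line up, so write one slot while the other is
          # read. Worst case this costs one frame (125us) per hop.
          class SlotBuffer:
              def __init__(self):
                  self.buf = [0, 0]
                  self.n = 0

              def put(self, sample):    # inbound timeslot arrives
                  self.buf[self.n % 2] = sample
                  self.n += 1

              def get(self):            # outbound timeslot departs
                  return self.buf[(self.n - 1) % 2]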

    • rasz 8 hours ago

      Didn't GSM (2G) work the same way, with dedicated regular timeslots per call? I don't know about 3G, but 4G finally introduced packetized voice data with VoLTE, and 5G cemented it.

      • joha4270 7 hours ago

        The interesting point wasn't the timeslots, but their size.

        Yes, 2G has fixed time slots, but a slot is used for a lot longer than a single (half?) sample.

        • miki123211 an hour ago

          2G (and all the standards after it) uses 20-millisecond frames.

          It needs to send 8 kHz audio at much lower bitrates (~14 kbps instead of 64 kbps), and you can't do that with raw PCM if you want decent quality. That means you need lossy compression and a codec, and those need far more than a single sample to work well.

          CDMA was similar; not sure what their frame size was exactly, but it was somewhere in the vicinity.
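
          The arithmetic, using GSM full-rate as the example (260 bits per frame; other codecs differ):

              # Why 2G can't ship single samples: a codec frame spans 160 samples.
              FRAME_MS, RATE_HZ = 20, 8000
              samples_per_frame = RATE_HZ * FRAME_MS // 1000   # 160
              pcm_bits = samples_per_frame * 8                 # 1280 bits of raw PCM
              gsm_fr_bits = 260                                # GSM full-rate, per frame
              print(gsm_fr_bits / FRAME_MS, pcm_bits / FRAME_MS)  # 13.0 vs 64.0 kbps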

  • toast0 a day ago

    > FTA: 11us is _unbelievably good_ digital latency, and with near-unlimited length it's actually a pretty good value proposition for live audio. There may be a niche demand for a product like this: slap in some SFP adapters and transmit a channel of digital audio over whatever medium you like.

    Used to be you could get a PRI (ISDN/T1) phone line for this kind of work, but I think it's pretty doubtful that you can keep it end-to-end low-latency PRI with modern telephony. You'd have to be OK with a single channel of 8-bit, 8 kHz uLaw, but that's not that bad; you could probably orchestrate multiple calls for multiple channels. Someone is going to convert it to SIP with 20ms packets and there goes your latency.

  • lukeh 16 hours ago

    Dante network latency can go as low as 125us.

    • exabrial 4 hours ago

      Is there a mode I'm unaware of? I've never had Dante latency that low, let alone that predictable. 1ms-2ms is average with occasional spikes in my experience, and the more complex the network setup the worse it gets.

    • chgs 12 hours ago

      Is that in AES67 mode?

      I don't dabble much in low-latency audio, but from what I remember Dante tended to be about 1ms?

      • lukeh 12 hours ago

        AES67 mode is unfortunately limited to 1ms or higher.

vluft a day ago

On a related note, the excellent DIY Perks YouTube channel recently replaced TOSLINK LEDs with lasers to do a wireless surround system: https://www.youtube.com/watch?v=1H4FuNAByUs

  • dylan604 a day ago

    What happens when your sub starts kicking so hard that your walls start to vibrate causing the line of sight to go intermittent?

    • ragebol a day ago

      Then the audio drops out, so it's a self-correcting problem!

      Also, the beam is a bit divergent, even if it vibrates the beam could still cover the sensor.

      • kridsdale3 3 hours ago

        What an excellent natural interpretation of "DROP THE BASS"

      • dylan604 a day ago

        Not necessarily. The sub is not usually attached to a wall, so it wouldn't self-correct like you're suggesting.

        • chowells a day ago

          I think you missed a joke there.

          Loss of signal -> silence -> no vibrations -> signal resumption.

          • dylan604 a day ago

            No, you're missing the point. The subwoofer is not connected to a wall that vibrates, so it wouldn't miss the signal. The surround speakers (and possibly the front speakers) tend to be attached to a wall. The point is that the floor doesn't shake enough for the sub to lose alignment.

            • ragebol 14 hours ago

              I was making a joke though.

              Also, if you bounce the signal off a mirror on the wall like DIY Perks did, then walls vibrating even a little bit will be an issue if the beam is narrow enough.

            • simoncion 9 hours ago

              Well, (to treat this seriously, rather than the joke it was) where's your transmitter? And are there vibration-sensitive components inside of either the transmitter or receiver? Several times a month, cars idle outside my apartment with bass loud enough to severely shake my windows, and somewhat shake my walls and floor. I imagine a receiver that's physically attached (or merely very near) to a subwoofer that loud would have trouble maintaining a steady optical link.

  • actionfromafar a day ago

    Next step, point a TOSLINK laser at the Moon Retroreflectors!

    • dylan604 a day ago

      There was something posted not too long ago where someone bounced radio signals off of the moon and then, based on testing what that did to the signal, turned the effect into an audio filter.

      • jrockway 20 hours ago

        https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%E2%80%93Ear...

        I like the example audio file they have for the article, because the QSO ends with "73, bye bye" and that bounces off the moon and is received by the sender a little bit later. The moon is far away!

        (I also really enjoy the distortion to SSB signals that you get by tuning the "carrier" frequency slightly wrong; more likely in this case because the moon changes the frequency of the reflected signal due to the doppler effect. Also happens with satellite comms, though you might not notice if you're using FM and not SSB.)

    • mey a day ago

      The dark side of the moon on continuous loop would be an interesting project.

  • skerit 4 hours ago

    A part of me wants to use his idea to set up some kind of wireless data connection just for fun.

  • pseudosavant a day ago

    Such a great video. There is a really good chance I use that technique for a remote subwoofer at some point. Really elegant solution.

  • gorkish a day ago

    The problem with DIY Perks' solution is that the Manchester clock+data encoding is an amplitude-modulated thing and isn't really very robust in free space: LED bulbs, sunlight, and all manner of other stuff can and will mess with it. This is probably why he ended up having to go with lasers instead of just a big IR blaster against the ceiling. If he modulated the OOK signal onto some kind of carrier, the entire thing would be a lot more reliable and, as a bonus, could probably ditch the lasers. This is more or less how the infrared wireless speakers and headphones of yore (the '80s and '90s) did the job.
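
    A sketch of that carrier approach (all parameters invented for the demo): the data level gates a subcarrier and the receiver envelope-detects around it, so ambient light mostly lands out of band.

        import numpy as np

        # OOK onto a subcarrier, like the old IR audio gear: the data level
        # gates a carrier tone instead of driving the emitter directly.
        FS, F_CARRIER, BIT_RATE = 1_000_000, 38_000, 2_000  # demo values
        SPB = FS // BIT_RATE                                # samples per bit

        def modulate(bits):
            t = np.arange(len(bits) * SPB) / FS
            data = np.repeat(bits, SPB)                     # NRZ level per sample
            return data * (np.sin(2 * np.pi * F_CARRIER * t) > 0)

        def demodulate(sig):
            win = FS // F_CARRIER                           # ~one carrier cycle
            env = np.convolve(sig, np.ones(win) / win, mode="same")
            centers = np.arange(len(sig) // SPB) * SPB + SPB // 2
            return (env[centers] > 0.25).astype(int)        # sample bit centres

        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
        assert (demodulate(modulate(bits)) == bits).all()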

    • Neywiny a day ago

      So the problem with his solution is that he needed a solution to solve a problem?

    • amluto 18 hours ago

      If you mean a literal "IR blaster", those generally modulate onto a 38kHz carrier. (I built an IR blasting device out of a 555 timer and an LED once, and it worked great, and no, I did not use precision resistors or capacitors. I admit I'm not actually sure whether a standard IR blaster contains a modulator or whether the device supplying the signal is expected to pre-modulate it.) You're not going to get anything resembling acceptable audio quality over consumer IR tech.

pclmulqdq a day ago

Large-scale audio systems will often use synchronous Ethernet or other similar protocols instead of things like TOSLINK at this point.

Also, a general solution to "send low-bandwidth over an SFP" is to use FM or phase modulation to carry the signal on top of a carrier wave that is fast enough for the retimers in question. Buffer and retimer chips will not respect amplitude in a modulation system, but they will largely preserve frequency and phase.
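
A toy illustration of that principle (rates invented): encode each slow bit as one of two transition densities; the data then survives any stage that regenerates edges but ignores amplitude.

    # FSK a slow bitstream onto a fast line: bit 0 -> toggle every 4 UIs,
    # bit 1 -> toggle every 2 UIs. A retimer regenerates edges rather than
    # levels, so the transition density (frequency) carries the data.
    UIS_PER_BIT = 64  # line-rate unit intervals per slow bit (invented)

    def encode(bits):
        out, level = [], 0
        for b in bits:
            half_period = 2 if b else 4
            for i in range(UIS_PER_BIT):
                if i % half_period == 0:
                    level ^= 1
                out.append(level)
        return out

    def decode(line):
        bits = []
        for i in range(0, len(line), UIS_PER_BIT):
            chunk = line[i:i + UIS_PER_BIT]
            edges = sum(a != b for a, b in zip(chunk, chunk[1:]))
            bits.append(1 if edges > 3 * UIS_PER_BIT // 8 else 0)  # ~31 vs ~15
        return bits

    msg = [1, 0, 0, 1, 1, 0]
    assert decode(encode(msg)) == msg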

  • rdtsc a day ago

    Indeed. I worked with CobraNet for some years. I kind of like their isochronous protocol, but being a Layer 2 protocol I believe it's outdated at this point.

    Also greetings, again (I believe?) from a fellow assembly username HNer!

  • iancmceachern a day ago

    Yeah, there is a whole standard for it

    https://en.m.wikipedia.org/wiki/Audio_over_Ethernet

    This is what most professional places have

    • dekhn a day ago

      I had a dream many years ago where I could connect all my house devices (all the TVs, stereos, etc.) to one Ethernet network (ideally the same physical network as my switched Internet ports) and send AV from any source to any destination without having to worry that much about formats or bandwidth limits.

      It never really happened; each company came up with their own bespoke solution, seemingly with a "mobile phone-first" philosophy.

      • iancmceachern a day ago

        They have this too; it's how those fancy systems in rich people's mansions and fancy board rooms work. A famous company in that world is Crestron. They make stuff that lets you do this and control everything from one central system.

        The protocol for the video is GigE Vision. It's how many fancy broadcast, CCD security, and fancy home theater/office setups work.

crote a day ago

I'm surprised it works this well!

A while ago I looked into this for a similar-ish hobby project, and the main dealbreaker seemed to be the mandatory AC coupling capacitors: they are intended to block DC currents, so a signal which is substantially slower than intended is essentially fighting a high-pass filter. This is also why there are special AV SFP transceivers: unlike Ethernet, SDI suffers from "pathological patterns" consisting of extremely long runs of 1s or 0s, which can cause "DC wander" [0]. SDI transceivers need to take this (albeit extremely unlikely) possibility into account, or risk losing signal lock.

For this reason I pretty much gave up on the idea of reliably going sub-100Mbps on cheap and easily available 1G / 10G SFP modules. Seeing it (mostly) work for TOSLINK at 3Mbps is beyond my wildest expectations - I bet the LVDS driver's high slew rate is doing quite a bit of work here too.

[0]: https://www.ti.com/lit/an/snaa417/snaa417.pdf
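
The high-pass intuition is easy to demo with a one-pole filter standing in for the coupling caps (sample rate and corner frequency are illustrative):

    import math

    # AC coupling is effectively a one-pole high-pass. A signal far below
    # its corner frequency droops toward zero: "DC wander".
    FS = 10_000_000_000   # pretend we sample at the 10G line rate
    F_CORNER = 100_000    # illustrative coupling corner, ~100 kHz
    ALPHA = 1 / (1 + 2 * math.pi * F_CORNER / FS)

    def highpass(x, x_prev=0.0, y_prev=0.0):
        out = []
        for xi in x:
            y_prev = ALPHA * (y_prev + xi - x_prev)  # y[n]=a(y[n-1]+x[n]-x[n-1])
            x_prev = xi
            out.append(y_prev)
        return out

    # One half-bit of ~3 Mbps TOSLINK is ~1600 identical 10G-rate samples:
    droop = highpass([1.0] * 1600)
    print(droop[0], droop[-1])  # ~1.0 at the start, ~0.9 by the end of the run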

  • MrRadar a day ago

    The article mentions that S/PDIF (which TOSLINK is an optical version of) uses Manchester code [1], which eliminates the DC component by ensuring every bit has at least one transition of the signal between high and low.

    [1] https://en.wikipedia.org/wiki/Manchester_code
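
    A sketch of the encoding (plain Manchester for simplicity; strictly speaking S/PDIF uses the closely related biphase-mark code):

        # Plain Manchester: each bit becomes a two-chip symbol containing one
        # transition, so the line spends equal time high and low -> no DC.
        def manchester(bits):
            return [chip for b in bits for chip in ((1, 0) if b else (0, 1))]

        chips = manchester([1, 1, 1, 1, 0, 0, 0, 0])
        assert sum(chips) == len(chips) // 2  # always exactly half ones
        print(chips)  # [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1]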

    • crote a day ago

      The problem is the speed. S/PDIF doesn't have a DC component at the S/PDIF bit rate, but to an SFP+ transceiver that S/PDIF signal is a lot closer to DC than to its expected signal. A single S/PDIF bit viewed as if it were a 10Gbps signal looks like thousands of 1s followed by thousands of 0s. Yes, they all balance out in the end, but you can still develop quite a large drift within a single sub-S/PDIF-bit sequence.

      A thought experiment to clarify it: let's say you are hoisting a bucket with a DC motor. You're feeding it with a 50Hz AC power source. It's obviously not going anywhere, because it's just oscillating rapidly. You'd need for the motor to run in a single direction for a few minutes to actually lift the bucket. Now drive it with a 0.0000001Hz AC power source (which starts at peak voltage). The motor is going to reverse after 58 days, but does that actually matter? For any practical purposes, how is it different from a DC power source?

      • crest 21 hours ago

        That's why you get problems around 10Gbps. But simple 10Gbps optics, and AFAIK all 1Gbps-or-slower optics, don't use the "fancy" kind of signal processing, because it wasn't needed. Their lower cut-off frequency should be around 100kHz.

      • nomel 21 hours ago

        Does SFP+ not have a scrambler/descrambler to make this a non-issue, like almost all other PHYs?

        https://en.m.wikipedia.org/wiki/Scrambler

        • jrockway 20 hours ago

          Scrambling is done before the SFP+ module sees the signal, but the module makes the design assumption that it is being done. That assumption is right for 10G Ethernet; it is wrong (at a certain time scale) for S/PDIF.

          I also think that https://en.wikipedia.org/wiki/Line_code is the term you're looking for.

      • MrRadar a day ago

        Thanks for the explanation!

    • teraflop a day ago

      Yup, but that only works if those transitions happen frequently enough compared to the time constant of the high-pass filter. Presumably, that's why the author found that the optics only worked with signals above about 150kHz.

  • michaelt 12 hours ago

    I can understand DC wander being a problem on copper Ethernet, where the signal goes through an isolation transformer - which is there specifically to block DC; you don't want to accidentally make a ground loop between buildings, after all.

    But presumably an optical SFP doesn't need to block DC, because you can't make a ground loop over optical fibre?

glitchc a day ago

Once you replace the TOSLINK transmitter with an SFP module, it's not the TOSLINK tx/rx that's being tested but rather the low-bandwidth S/PDIF protocol operating over a high bandwidth SFP link. So it's not really TOSLINK that's being extended but rather S/PDIF over optical fibre. Maybe I'm missing something....

  • toast0 a day ago

    TOSLINK is S/PDIF over (usually plastic) optical fiber. S/PDIF over SFP is S/PDIF over optical fiber too, unless you're using SFP DACs.

myself248 a day ago

Fiber techs have "talk sets" which are just little voice intercoms that you plug into an unused fiber in the bundle, so you can yammer back and forth between manholes/closets/whatever. I'm not sure whether they're even digital; it's been a while since I played with a pair.

  • mrguyorama a day ago

    How do you non-destructively jack into a glass fiber? Or are they limited to hooking into transceivers on the ends?

    • myself248 a day ago

      You're correct that the talk-sets have to plug into the ends.

      However, there are directional indicators that just clamp onto the middle of a fiber. They bend it a little and sample the light that leaks out of the bend, without interrupting payload traffic. The first one I used back in the day was an EXFO, but there are tons of 'em now.

      As far as I know, these are receive-only, though physics doesn't seem to prohibit launching light into the fiber this way; it would just be an extremely inefficient process.

      There isn't enough light leaking out to reconstruct the whole high-bit-rate signal (as far as I know), but there's enough to tell whether the light is flowing one way or the other, or both. And there's enough to tell whether it's modulated with a low frequency signal -- most optical test sets can generate a simple "tone", typically 270 Hz, 330 Hz, 1 kHz, or 2 kHz, and the clamp testers can tell if and which tone is present.
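
      For the tone detection, a toy Goertzel sketch (roughly how an instrument could pick out which tone is present; real testers do this in hardware):

          import math

          # A Goertzel filter measures energy at one candidate frequency,
          # which is all you need to identify a 270/330/1000/2000 Hz tone.
          def goertzel(samples, fs, f):
              w = 2 * math.pi * f / fs
              coeff, s1, s2 = 2 * math.cos(w), 0.0, 0.0
              for x in samples:
                  s1, s2 = coeff * s1 - s2 + x, s1
              return s1 * s1 + s2 * s2 - coeff * s1 * s2  # energy at f

          FS = 8000
          sig = [math.sin(2 * math.pi * 2000 * n / FS) for n in range(800)]
          tones = {f: goertzel(sig, FS, f) for f in (270, 330, 1000, 2000)}
          print(max(tones, key=tones.get))  # -> 2000, the tone on the fiber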

    • 0_____0 a day ago

      My guess is it's already-terminated dark fiber with an FC connector (no transceiver)

      Found an example here. https://www.fiberinstrumentsales.com/fis-singlemode-multimod...

      You can't really "get into" an optical fiber mid-run without splicing. Splicing isn't really that hard (I've done it! Fusion splicers are little robotic wonders. Most of the work is in the prep, not the splice itself.)

    • toast0 a day ago

      You're probably in the manhole to work on a fiber break anyway...

      • chgs 12 hours ago

        And hopefully not break all the other fibres while doing it.

        Of course that’s why we get so concerned about pinch points with dual fibres

brudgers a day ago

Recently, I described Toslink in an internet conversation...the other person expected it to be like USB. It is pretty amazing how old this technology is and how little anyone complains about it.

There just aren't Toslink horror stories floating around the popular internet (SPDIF is another WTF-a-75Ω-RCA-cable? story). Toslink is a technology that just works (and the normal limit is a generous 10m).

theandrewbailey a day ago

> TOSLINK/SPDIF turns this into a manchester coded serial signal, at around 1.5Mbps that is much more resiliant to analog interference

When I was connecting my surround sound receiver to my PC, I was bummed that the SPDIF standard was never improved to support 5.1 or 7.1 uncompressed surround sound. 5.1 DTS compression is the best it can do (due to the ~1.5 Mbps bandwidth), but PC support is rather limited. I gave up, and I've been using HDMI for 10 years. Running it through my video card/drivers has introduced (bearable) complexity, but I wonder why receivers to this day can't connect to PCs over USB instead. (Yes, most receivers have USB ports, but those are for playing MP3s off a flash drive. A PC isn't a flash drive.)

  • toast0 a day ago

    I think the root of the problem is that the lack of bidirectional signalling means you have to manually configure capabilities on both sides (which actually already happens for DTS/Dolby over SPDIF, so it wouldn't have been the end of the world...). Lack of bidirectional signalling also precludes content protection that's more effective than setting a "don't pirate" flag, which might be the real reason.

  • bar000n 8 hours ago

    I doubt the 1.5 Mbps limit, as many DACs spec TOSLINK as capable of 24-bit / 96 kHz stereo PCM, which sums to almost 5 Mbps of payload (2 ch × 24 bit × 96 kHz = 4.608 Mbps).

  • dylan604 a day ago

    > A PC isn't a flash drive

    That could be a kind of cool app that would allow you to present a folder on your PC as a media device. However that would then require a dreaded USB-A to USB-A type of cable <shudder>

    • akovaski a day ago

      You can do this (in Linux, at least. Mobile devices like Android as well.) if the USB port of the peripheral side is a USB OTG port. I've only seen USB OTG ports as USB-B (standard and micro) or USB-C.

      Edit: I didn't notice before, but USB OTG is on the front page right now https://news.ycombinator.com/item?id=42585167

      • nsteel 12 hours ago

        I think there was a RPi Zero project doing the rounds some years back that made use of this.

    • EvanAnderson a day ago

      Target disk mode on a lot of older Mac machines did that over Firewire. You could boot the machine into target disk mode and it would present its mass storage over Firewire. It was pretty cool.

      • UniverseHacker a day ago

        I loved that feature- I could take my shitty old laptop into a university computer lab and boot a powerful brand new mac with fast internet from my hard drive- and use all of my software as if it was my own computer.

      • dylan604 a day ago

        But you couldn't use the machine at the same time. This would be like a Samba share, but over USB.

        • zokier a day ago

          You can connect two computers with usb and setup network between them, so you can just use smb/cifs. Microsoft has even handy tutorial for that: https://learn.microsoft.com/en-us/windows-hardware/design/co...

          • dylan604 a day ago

            Again, this is not the same thing as allowing a USB cable to connect from a PC to another device that is expecting something that presents itself as a mass storage device.

    • ianburrell 18 hours ago

      It could be done with USB-C. The computers would need to figure out which one is the computer acting as the USB host, and which one is the "drive" acting as a USB device.

      This is called gadget mode. I don't know which PCs can do it, but the Raspberry Pi can.

martinmunk 7 hours ago

I did basically this exact same thing at work a few years ago.

For time-correlating audio measurements around the office buildings, I needed an analog reference signal kept in sync.

So I drew up a PCB design with a TOSLINK in/out connector, a connector for an SFP module, and just an LVDS driver in between. It worked straight away (more luck than skill). I could then reuse network fibers already run around the basement, convert the signal to analog in the MDF rooms of each building, and run the analog up to the 3rd floor through existing RJ45 cables.

fru654 a day ago

I wonder if something like this is possible with HDMI? Separate 10G SFP+ for each color channel, one more for I2C, create a similar style breakout PCB, maybe add an MPO or CWDM mux… Could be a fun project. Optical HDMI cables are expensive and most of the time come as a preexisting cable, which is hard to route (in conduits) due to the HDMI connector size.

  • crote a day ago

    Such products are already commercially available [0][1]!

    DIYing it is probably too painful to be doable. You won't be able to source any kind of protocol translation chip, so you'll have to send it essentially raw into quad SFP+ transceivers. Running 4+ fibers instead of the required 2 (or even 1) is very expensive, and any kind of WDM immediately blows up your budget. Unless you're getting the stuff for free from a DC renovation or something, it's just not worth it.

    On top of that you also have to deal with designing a board for extremely fast signals, which is pretty much impossible to debug without spending "very nice car" amounts of money on tooling. People have done it before, but I definitely don't envy them.

    [0]: https://www.startech.com/en-us/audio-video-products/st121hd2...

    [1]: https://www.blackmagicdesign.com/products/miniconverters/tec...

    • toast0 a day ago

      If you need 4x channels, it sounds like a job for QSFP? HDMI is already differential signalling, so you don't need to do that, but you might still need level shifting.

      Probably a box on the source end to manage DDC and strip HDCP.

    • raron a day ago

      > You won't be able to source any kind of protocol translation chip

      I think many of those chips are simple off-the-shelf parts. Probably you would need special licenses only to decode HDCP.

      If you have an FPGA, you could even create valid Ethernet frames and send the data / video stream over any standard switch / media converter as long as you have enough bandwidth and no packet loss. (10G would be enough for FullHD and 25G for 4K if you make it a bit smarter and can strip the blanking interval.)
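
      The bandwidth arithmetic behind those numbers (8-bit RGB, active pixels only):

          # Raw video bandwidth with the blanking interval stripped.
          def gbps(w, h, fps, bpp=24):
              return w * h * fps * bpp / 1e9

          print(gbps(1920, 1080, 60))  # ~3.0 Gbps -> fits 10G easily
          print(gbps(3840, 2160, 60))  # ~11.9 Gbps -> needs 25G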

    • Doohickey-d a day ago

      There's even cheaper versions of this now, "fiber" HDMI cables with the electronics in the HDMI plugs themselves, no additional power required. They go up to 100m length. I do wonder how these work, since I've never seen a good teardown of one.

    • 15155 10 hours ago

      > You won't be able to source any kind of protocol translation chip

      This is called an FPGA.

  • wolrah a day ago

    I have wondered about the same (and/or DisplayPort) but with QSFP optics to simplify dealing with the four channels of data.

    "Classic" DVI-derived HDMI would probably be trickier because of variable clock speeds and additional data but modern HDMI 2.1 is pretty similar to DisplayPort in that it uses four lanes at fixed rates and sends data as packets over those.

    I would love to be able to use standard widely available fiber patch cables for long distance video runs rather than needing proprietary cables only offered in fixed lengths and equipped with enormous connectors that are not friendly to conduit.

    Also these days data rates are getting high enough that even normal lengths are problematic, DisplayPort just recently announced that 3 meter cables will need active components for the full 80 gigabit per second mode, which means that a computer on the floor connecting to a monitor on a standing desk will not be guaranteed to work with passive cables. HDMI also recently announced version 2.2 with a bump from 48 to 96 gigabits per second so they'll presumably be in the same boat.

  • somat a day ago

    My plan, if I ever need long-haul (>3 meter) video or audio links, is to get the signal into Ethernet (or even better, IP) and use common network equipment to transport it.

    The theory being that Ethernet is such a well-developed, easy-to-source, common jelly-bean part that this would trump any gains that specialized transports might otherwise have.

    But this is probably just my inner network engineer being disdainful over unfamiliar transport layers.

    • myself248 a day ago

      Nah, this is totally the reasonable way to do it, iff you can tolerate the compression loss or whatever. 4K60 is like 12Gbps uncompressed (3840 × 2160 × 60 × 24 bit ≈ 11.9 Gbps), and even more after you cram Ethernet headers onto everything. So most such devices include some compression, and the really expensive ones let you configure how much.

      Failing that, you're probably doing SDI over your own lambda.

    • gh02t 18 hours ago

      It's much cheaper to just buy an optical HDMI cable if you need a long point to point run, it's like 50 bucks for 100 ft. The cool stuff you can do with HDMI over IP lies in switching the signal to different endpoints on demand and things like multicast to multiple receivers, both of which are things you can do with off the shelf HDMI over IP gear.

    • zokier a day ago

      That is happening in the pro world, check out e.g. SMPTE ST 2110.

  • psophis a day ago

    Not HDMI, but SDI over fiber is basically this. It can be muxed and is used in the broadcast industry for long haul camera feeds.

    • chgs 12 hours ago

      SDI over fibre with a cheap converter if you need to push multiple hundred metres. Then people moved to 2022-6, which packetised the SDI over IP, and now 2110, which breaks the SDI out into its components.

      For most long-haul links people still compress: good old H.264 or H.265 with latencies in the 200-500ms range (plus network propagation), or J2K/JXS and NDI, which are more like 50-150ms. Ultimately 200 Mbit of H.265 is far cheaper to transmit than 10-ish Gbit of 2110, and in many cases the extra 500ms doesn't matter.

jrockway 20 hours ago

I fear that I'm about to be nerd sniped because I really want to try to make this work as a "proper" 10G signal.

I feel that if you oversample the SPDIF signal and line code it to not have a DC bias, and do the opposite on the receive end, it would work. That is maybe too much transformation to be interesting, however. So I wonder what happens if you sample the signal at the 10G Ethernet sample rate like a 1-bit ADC does, transmit that, and smooth the too-high-frequency result with a capacitor?
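
Roughly the idea, as a toy sketch (oversampling factor shrunk way down from the real ~3000x so it runs):

    # Oversample the slow S/PDIF waveform at the line rate, then re-encode
    # each captured chip as a (chip, inverted chip) pair -- Manchester at the
    # line rate -- so the fast stream stays DC-free no matter how slow the
    # input is. The receiver keeps the data chips and majority-votes them.
    OVERSAMPLE = 16  # stand-in for the real 10G / 3M ratio of ~3000

    def tx(spdif_levels):
        line = []
        for level in spdif_levels:          # the 1-bit "ADC" sampling
            for _ in range(OVERSAMPLE):
                line += [level, level ^ 1]  # DC-free chip pair
        return line

    def rx(line):
        chips = line[0::2]                  # keep the data chips
        return [int(sum(chips[i:i + OVERSAMPLE]) * 2 > OVERSAMPLE)
                for i in range(0, len(chips), OVERSAMPLE)]  # majority vote

    wave = [0, 1, 1, 0, 1, 0, 0, 1]         # sampled S/PDIF levels
    assert rx(tx(wave)) == wave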

I am very worried that I may end up trying this ;)

PaulHoule a day ago

I think it's amusing that optic fiber connectors have had so little success in the market, though I have a few TOSLINK and the coaxial equivalent in my upstairs home theater (I have a Sony 300-disc CD changer packed with DTS 5.1 Music Discs, so I'm living the surround music dream) and downstairs (computer to stereo, computer to MiniDisc recorder, etc.).

I recently got a cable to hook up a Meta Quest 3 to a PC for PCVR. My understanding is that it works like a high-spec USB 3 cable but has an optic fiber in it for the data, so it can be really long.

  • synchrone a day ago

    I tore down an Oculus Link cable - it's just copper internally.

    Also, the Oculus works fine over the "charging" Type-C cable + a Type-C to Type-A adapter + a classic copper USB 3.0 extender of another 1.8 meters.

    • PaulHoule a day ago

      I use it for other things and it performs admirably. (In particular, my Sony camera has trouble with cheap cables.) It is one of two "elite" USB-C cables I keep near my computer; the other one is the shorter cable that came with the Looking Glass Go.

  • TacticalCoder a day ago

    > I think it's amusing that optic fiber connectors have had so little success in the market though I have a few TOSLINK and the coaxial equivalent in my upstairs home theater ...

    Is TOSLINK that unsuccessful? I was already using TOSLINK a very long time ago (in the nineties) and I'm still using TOSLINK today. Enter any audio store and they have TOSLINK cables.

    It's very old by now though and I take it there's better stuff but TOSLINK still does the job.

    My "music" path doesn't use TOSLINK:

        source, eg QOBUZ for (lossless) streaming -> ethernet -> integrated amp (which has a DAC) -> speakers
    
    But my "movie" (no home theater anymore atm) path uses TOSLINK:

        TV -> TOSLINK -> same integrated amp -> speakers
    
    That amp is quite new and has all the bells and whistles (ARC and network streaming, for example), yet it still comes with not one but two (!) TOSLINK inputs.

    I'd say that's quite a successful life for a standard that came out in the mid eighties.

ahofmann a day ago

This is wonderfully useless, what a great delight to read!

m463 11 hours ago

I thought I read somewhere that someone had jammed a fiber cable directly into a TOSLINK/SPDIF port (doing all of this optically, without an SFP).

Can't seem to find the article.

(Janky in comparison to this article, which is amazing!)

rcarmo 14 hours ago

“Knowing this stuff means that it is possible to build bigger, better, more horrifying solutions/workarounds to problems.”

Hear, hear. Great read!

khaki54 a day ago

I love when people do random stuff like this. I couldn't even suss out his reasoning for taking this project on. Normally there is at least a notional but absurd use case. Cool project though, and I'm sure he had fun.

blt a day ago

TOSLink was kind of a silly idea because digital electrical signals would also prevent ground loops. The key is digital vs. analog, not optical vs. electrical.

  • ielillo a day ago

    Ground loops come from the ground mismatch between two electrically connected devices. When you use an optical link, you isolate the two devices: there is no common ground, and the hum goes away. Same if you connect a battery-powered device to a grounded device.

no_identd a day ago

…now complete the circle, and run a 56k V.92(*) link over it. 8)

(* important, cuz despite claims to the contrary V.90 ain't at the Shannon limit, but V.92 is — kind of. See https://news.ycombinator.com/item?id=4344349 )

  • no_identd a day ago

    Follow up, quoting from the article:

    >It is tempting to attach a “dialup” modem to both sides, this would probably create the greatest modern day waste of a 100 GHz optical channel, given that it gives a final output bandwidth of ~40 kbit/s, and I assume this would probably confuse an intelligence agency if they were tapping the line.

    Regardless of the fact that 48 kbps seems more likely, I'd really like to know the noise floor & SNR of that link

omer9 a day ago

Light travels 300,000km/h, not 200,000km/h. Or did I overlook something?

  • halestock a day ago

    It’s about 200,000km/h when traveling through fiber optic cable.

    • rayhaanj a day ago

      I think you meant kilometres per second, not per hour.

  • crote a day ago

    Very simplified: the speed of light isn't constant. The well-known 299,792,458 m/s constant is the speed of light in a vacuum - and glass isn't a vacuum. Light goes significantly slower in a lot of media, including glass, and it's why things like lenses are possible.

    • somat a day ago

      It is also why high-speed trading firms invest in microwave radio links: the speed of light through air is enough faster than the speed of light through glass that they feel it gives them a trading edge.

      Honestly, gaming the system this hard really worries me. A lot of our economic ability is tied up in these trading systems (the stock market), and I can see something going wrong far faster than our ability to fix it.

  • formerly_proven a day ago

    Speed of light in a medium is c divided by the index of refraction, which is about 1.5 for every common glass and highly transparent plastic - hence roughly 300,000 / 1.5 ≈ 200,000 km/s in fiber.