
December 02, 2024


Dirk Eddelbuettel

anytime 0.3.10 on CRAN: Multiple Enhancements

A new release of the anytime package arrived on CRAN today—the first in well over four years. The package is fairly feature-complete, and code and functionality remain mature and stable, of course.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate), and to do so without requiring a format string, while also accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation.
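For instance, from a shell (a quick sketch; these particular inputs are ones the package's format heuristics are meant to cover, but see the documentation site for the authoritative list of supported formats):

Rscript -e 'anytime::anytime(c("2024-12-02 10:01:02", "2024/12/02 10:01:02"))'
Rscript -e 'anytime::anydate(20241202)'   # integer YYYYMMDD input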

This release slowly matured over four years. It combines a number of strictly internal repository maintenance changes (such as updates to continuous integration) with small enhancements (adding, for example, some new formats, responding better to an error condition, and treating logical input as an error), along with a relaxation of the C++ compilation standard. While we once needed to impose C++11, that constraint is no longer necessary: R itself has been quite proactive (the last two releases already defaulted to C++17, suitable compiler permitting), so we can now relax it. The documentation site is new, as are some other small changes. The full list of changes follows.

Changes in anytime version 0.3.10 (2024-12-02)

  • A new documentation site was added.

  • Continuous Integration now uses run.sh from r-ci with bspm

  • Logical input vectors are now recognised as an error (#121)

  • Additional dot-separated format '%Y.%m.%d' is supported

  • Other small updates were made throughout the package

  • No longer set a C++ compilation standard as the default choices by R are sufficient for the package

  • Switch Rcpp include file to Rcpp/Lightest

  • We recommend ~/.R/Makevars compiler flag options -Wno-ignored-attributes -Wno-nonnull -Wno-parentheses

  • The tinytest runner was simplified

  • NA values from conversion now trigger a warning

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker off the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 December, 2024 10:01PM


Jonathan Dowland

jungle/acid/etc

I thought it had been a full year since I last shared a playlist, but it's been two! I had a plan to produce more, but it seems I haven't. Instead here's a few tracks I've discovered recently which share a common theme.

In August I stumbled across a Sound on Sound video interviewing Pete Cannon, who creates authentically old-school Jungle music using tools and techniques from the time, including AKAI samplers and the Commodore Amiga computer.

Here are three tracks that I found since then: some 8-bit Amiga-jungle, some slower-paced acid house from someone ostensibly based in Whitley Bay, and a darker piece I heard on the radio.

02 December, 2024 10:00PM


Junichi Uekawa

Graph for my furusato tax.

Graph for my furusato tax. Exceeding 150-man (1.5 million yen) will inevitably exceed the 50-man (500,000 yen) limit for Ichiji-shotoku (one-time income). So I added some rough calculations there; see the graph.

02 December, 2024 05:39AM by Junichi Uekawa

Russ Allbery

Review: Long Live Evil

Review: Long Live Evil, by Sarah Rees Brennan

Series: Time of Iron #1
Publisher: Orbit
Copyright: July 2024
ISBN: 0-316-56872-4
Format: Kindle
Pages: 433

Long Live Evil is a portal fantasy (or, arguably more precisely, a western take on an isekai villainess fantasy) and the first book of a series. If the author's name sounds familiar, it's possibly because of In Other Lands, which got a bunch of award nominations in 2018. She has also written a lot of other YA fantasy, but this is her first adult epic fantasy novel.

Rae is in the hospital, dying of cancer. Everything about that experience, from the obvious to the collapse of her friendships, absolutely fucking sucks. One of the few bright points is her sister's favorite fantasy series, Time of Iron, which her sister started reading to her during chemo sessions. Rae mostly failed to pay attention until the end of the first book and the rise of the Emperor. She fell in love with the brooding, dangerous anti-hero and devoured the next two books. The first book was still a bit hazy, though, even with the help of a second dramatic reading after she was too sick to read on her own.

This will be important later.

After one of those reading sessions, Rae wakes up to a strange woman in her hospital room who offers her an option. Rather than die a miserable death that bankrupts her family, she can go through a door to Eyam, the world of Time of Iron, and become the character who suits her best. If she can steal the Flower of Life and Death from the imperial greenhouse on the one day a year that it blooms, she will wake up, cured. If not, she will die. Rae of course goes through, and wakes in the body of Lady Rahela, the Beauty Dipped in Blood, the evil stepsister. One of the villains, on the night before she is scheduled to be executed.

Rae's initial panic slowly turns to a desperate glee. She knows all of these characters. She knows how the story will turn out. And she has a healthy body that's not racked with pain. Maybe she's not the heroine, but who cares, the villains are always more interesting anyway. If she's going to be cast as the villain, she's going to play it to the hilt. It's not like any of these characters are real.

Stories in which the protagonists are the villains are not new (Nimona and Hench come to mind just among books I've reviewed), but they are having a moment. Assistant to the Villain by Hannah Nicole Maehrer came out last year, and this book and Django Wexler's How to Become the Dark Lord and Die Trying both came out this year. This batch of villain books all take different angles on the idea, but they lean heavily on humor. In Long Live Evil, that takes the form of Rae's giddy embrace of villainous scheming, flouncing, and blatant plot manipulation, along with her running commentary on the various characters and their in-story fates.

The setup here is great. Rae is not only aware that she's in a story, she knows it's full of cliches and tropes. Some of them she loves, some of them she thinks are ridiculous, and she isn't shy about expressing both of those opinions. Rae is a naturally dramatic person, and it doesn't take her long to lean into the opportunities for making dramatic monologues and villainous quips, most of which involve modern language and pop culture references that the story characters find baffling and disconcerting.

Unfortunately, the base Time of Iron story is, well, bad. It's absurd grimdark epic fantasy with paper-thin characters and angst as a central character trait. This is clearly intentional for both in-story and structural reasons. Rae enjoys it precisely because it's full of blood and battles and over-the-top brooding, malevolent anti-heroes, and Rae's sister likes the impossibly pure heroes who suffer horrible fates while refusing to compromise their ideals. Rae is also about to turn the story on its head and start smashing its structure to try to get herself into position to steal the Flower of Life and Death, and the story has to have a simple enough structure that it doesn't get horribly confusing once smashed. But the original story is such a grimdark parody, and so not my style of fantasy, that I struggled with it at the start of the book.

This does get better eventually, as Rae introduces more and more complications and discovers some surprising things about the other characters. There are several delightful twists concerning the impossibly pure heroine of the original story that I will not spoil but that I thought retroactively made the story far more interesting. But that leads to the other problem: Rae is both not very good at scheming, and is flippant and dismissive of the characters around her. These are both realistic; Rae is a young woman with cancer, not some sort of genius mastermind, and her whole frame for interacting with the story is fandom discussions and arguments with her sister. Early in the book, it's rather funny. But as the characters around her start becoming more fleshed out and complex, Rae's inability to take them seriously starts to grate. The grand revelation to Rae that these people have their own independent existence comes so late in the book that it's arguably a spoiler, but it was painfully obvious to everyone except Rae for hundreds of pages before it got through Rae's skull.

Those are my main complaints, but there was a lot about this book that I liked. The Cobra, who starts off as a minor villain in the story, is by far the best character of the book. He's not only more interesting than Rae, he makes everyone else in the book, including Rae, more interesting characters through their interactions. The twists around the putative heroine, Lady Rahela's stepsister, are a bit too long in coming but are an absolute delight. And Key, the palace guard that Rae befriends at the start of the story, is the one place where Rae's character dynamic unquestionably works. Key anchors a lot of Rae's scenes, giving them a sense of emotional heft that Rae herself would otherwise undermine.

The narrator in this book does not stick with Rae. We also get viewpoint chapters from the Cobra, the Last Hope, and Emer, Lady Rahela's maid. The viewpoints from the Time of Iron characters can be a bit eye-roll-inducing at the start because of how deeply they follow the grimdark aesthetic of the original story, but by the middle of the book I was really enjoying the viewpoint shifts. This story benefited immensely from being seen from more angles than Rae's chaotic manipulation. By the end of the book, I was fully invested in the plot line following Cobra and the Last Hope, to the extent that I was a bit disappointed when the story would switch back to Rae.

I'm not sure this was a great book, but it was fun. It's funny in places, but I ended up preferring the heartfelt parts to the funny parts. It is a fascinating merger of gleeful fandom chaos and rather heavy emotional portrayals of both inequality and the experience of terminal illness. Rees Brennan is a stage four cancer survivor and that really shows; there's a depth, nuance, and internal complexity to Rae's reactions to illness, health, and hope that feels very real. It is the kind of book that can give you emotional whiplash; sometimes it doesn't work, but sometimes it does.

One major warning: this book ends on a ridiculous cliffhanger and does not in any sense resolve its main plot arc. I found this annoying, not so much because of the wait for the second volume, but because I thought this book was about the right length for the amount of time I wanted to spend in this world and wish Rees Brennan had found a way to wrap up the story in one book. Instead, it looks like there will be three books. I'm in for at least one more, since the story was steadily getting better towards the end of Long Live Evil, but I hope the narrative arc survives being stretched out across that many words.

This one's hard to classify, since it's humorous fantasy on the cover and in the marketing, and that element is definitely present, but I thought the best parts of the book were when it finally started taking itself seriously. It's metafictional, trope-subverting portal fantasy full of intentional anachronisms that sometimes fall flat and sometimes work brilliantly. I thought the main appeal of it would be watching Rae embrace being a proper villain, but then the apparent side characters stole the show. Recommended, but you may have to be in just the right mood.

Content notes: Cancer, terminal illness, resurrected corpses, wasting disease, lots of fantasy violence and gore, and a general grimdark aesthetic.

Rating: 7 out of 10

02 December, 2024 05:26AM

Charles

Hello World

Or how it took more than a year for me to set up this website -

As the computer science tradition demands, we must start with a Hello World.

Though I have to say this hello world took quite a long time to reach the internet. I’ve been thinking about setting up this website for way over a year, but there are always too many things to decide - what Static Site Generator will I use? Where should I get a domain from? Which registrar would be better now? What if I want to set up a mail server, is it good enough? Oh, and what about the theme, which one to choose? Can I get one simple enough to not fetch javascript or css from external sources?

This was taking so long that even my friends were saying “Please, just share your screen and let’s do it now!”. Well, rejoice friends, now it’s done!

02 December, 2024 02:18AM

December 01, 2024


Guido Günther

Free Software Activities November 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.43 release, the initial file chooser portal, phosh-osk-stub now handling the digit, number, phone and PIN input purposes via special layouts, as well as Phoc mostly catching up with wlroots 0.18 and the current development version targeting 0.19.

phosh

  • When taking a screenshot via keybinding or power button long press save screenshots to clipboard and disk (MR)
  • Robustify Screenshot CI job (MR)
  • Update CI pipeline (MR)
  • Fix notification banners that aren't tall enough not being shown (MR). Another 4y old bug hopefully out of the way.
  • Add rfkill mock and docs (MR). Useful for HKS testing.
  • Release 0.43~rc1 and 0.43
  • Drop libsoup workaround (MR)
  • Ensure notification only takes its actual height (MR)

phoc

  • Move wlroots 0.18 update forward (MR). Needs a bit more work before we can make it default.
  • Catch up with wlroots development branch (MR) allowing us to test current wlroots again.
  • Some of the above already applies to main so schedule it for 0.44 (MR)

phosh-mobile-settings

  • Don't mark release notes as translatable to save some i18n effort (MR)
  • Release 0.43~rc1 and 0.43.0

libphosh-rs

phosh-osk-stub

  • Add layouts for PIN, number and phone input purpose (MR)
  • Release 0.43~rc1
  • Ensure translations get picked up, various cleanups and release 0.43.0 (MR)
  • Make desktop file match app-id (MR)

phosh-tour

  • Fix typo and reduce number of strings to translate (MR)
  • Add translator comments (MR). This, the above and additional fixes in p-m-s were prompted by i18n feedback from Alexandre Franke, thanks a lot!
  • Release 0.43.0

pfs

  • Initial version of the adaptive file chooser dialog using gtk-rs. See demo.
  • Allow to activate via double click (for non-touch use) (MR)

xdg-desktop-portal-phosh

  • Use pfs to provide a file chooser portal (MR)

meta-phosh

  • Slightly improve point release handling (MR)
  • Improve string freeze announcements and add phosh-tour (MR)

Debian

  • Upload Phosh 0.43.0~rc1 and 0.43.0 (MR, MR, MR, MR, MR, MR, MR, MR, MR, MR, MR)
  • meta-phosh: Add Recommend: for xdg-desktop-portal-phosh (MR)
  • phosh-osk-data got accepted, create repo, brush up packaging and upload to unstable (MR)
  • phosh-osk-stub: Recommend data packager (MR)
  • Phosh: drop reverts (MR)
  • varnam-schemes: Fix autopkgtest (MR)
  • varnam-schemes: Improve packaging (MR)
  • Prepare govarnam 1.9.1 (MR)

Calls

  • ussd: Set input purpose and switch to AdwDialog (MR, Screenshot)

libcall-ui

  • Drop libhandy leftover (MR)

git-buildpackage

  • Improve docs and cleanup markdown (MR)
  • Mention gbp push in intro (MR)
  • Use application instead of productname entities to improve reading flow (MR)

wlroots

  • Drop mention of wlr_renderer_begin_with_buffer (MR)

python-dbusmock

  • Add mock for gsd-rfkill (MR)

xdg-spec

  • Sync notification categories with the portal spec (MR)
  • Add categories for SMS (MR)
  • Add a pubdate so it's clear the specs aren't stale (MR) (got fixed in a different and better way, thanks Matthias!)

ashpd

  • Allow to set filters in file chooser portal demo (MR)

govarnam

  • Robustify file generation (MR)

varnam-schemes

  • Unbreak tests on non intel/amd architectures (e.g. arm64) (MR)

Reviews

This is not code by me but reviews I did on other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • flathub: livi runtime and gst update (MR)
  • phosh: Split linters into their own test suite (MR)
  • phosh: QuickSettings follow-up (MR)
  • phosh: Accent color fixes (MR)
  • phosh: Notification animation (MR)
  • phosh: end-session dialog timeout fix (MR)
  • phosh: search daemon (MR)
  • phosh-ev: Migrate to newer gtk-rs and async_channel (MR)
  • phosh-mobile-settings: Update gmobile (MR)
  • phosh-mobile-settings: Make panel-switcher scrollable (MR)
  • phosh-mobile-settings: i18n comments (MR)
  • gbp doc updates (MR)
  • gbp handle suite names with number prefix (MR)
  • Debian libvirt dependency changes (MR)
  • Chatty: misc improvements (MR)
  • iio-sensor-proxy: buffer driver without trigger (MR)
  • gbp doc improvements (MR)
  • gbp: More doc improvements (MR)
  • gbp: Clean on failure (MR)
  • gbp: DEP naming consistency (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Comments?

Join the Fediverse thread

01 December, 2024 06:55PM


Colin Watson

Free software activity in November 2024

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Conferences

I attended MiniDebConf Toulouse 2024, and the MiniDebCamp before it. Most of my time was spent with the Freexian folks working on debusine; Stefano gave a talk about its current status with a live demo (frantically fixed up over the previous couple of days, as is traditional) and with me and others helping to answer questions at the end. I also caught up with some people I haven’t seen in ages, ate a variety of delicious cheeses, and generally had a good time. Many thanks to the organizers and sponsors!

After the conference, Freexian collaborators spent a day and a half doing some planning for next year, and then went for an afternoon visiting the Cité de l’espace.

Rust team

I upgraded these packages to new upstream versions, as part of upgrading pydantic and rpds-py:

  • rust-archery
  • rust-jiter (noticing an upstream test bug in the process)
  • rust-pyo3 (fixing CVE-2024-9979)
  • rust-pyo3-build-config
  • rust-pyo3-ffi
  • rust-pyo3-macros
  • rust-pyo3-macros-backend
  • rust-regex
  • rust-regex-automata
  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-speedate
  • rust-triomphe

Python team

Last month, I mentioned that we still need to work out what to do about the multipart vs. python-multipart name conflict in Debian (#1085728). We eventually managed to come up with an agreed plan; Sandro has uploaded a renamed binary package to experimental, and I’ve begun work on converting reverse-dependencies (asgi-csrf, fastapi, python-curies, and starlette done so far). There’s a bit more still to do, but I expect we can finish it soon.

I fixed problems related to adding Python 3.13 support in:

I fixed some packaging problems that resulted in failures any time we add a new Python version to Debian:

I fixed other build/autopkgtest failures in:

I packaged python-quart-trio, needed for a new upstream version of python-urllib3, and contributed a small packaging tweak upstream.

I backported a twisted fix that caused problems in other packages, including breaking debusine's tests.

I disentangled some upstream version confusion in python-catalogue, and upgraded to the current upstream version.

I upgraded these packages to new upstream versions:

Other small fixes

I contributed Incus support to needrestart upstream.

In response to Helmut’s Cross building talk at MiniDebConf Toulouse, I fixed libfilter-perl to support cross-building (5b4c2e10, f9788c27).

I applied a patch to move aliased files from / to /usr in iprutils (#1087733).

I adjusted debconf to use the new /usr/lib/apt/apt-extracttemplates path (#1087523).

I upgraded putty to 0.82.

01 December, 2024 03:00PM by Colin Watson


Junichi Uekawa

Lots of travel and back to Tokyo.

Lots of travel and back to Tokyo. Then I got sick. Trying to work on my bass piece, but it's really hard and I am having a hard time getting it into a reasonable shape. Discussions on the Debconf 2026 bid. Hoping it will materialize soon.

01 December, 2024 06:36AM by Junichi Uekawa

Russ Allbery

Review: Unexploded Remnants

Review: Unexploded Remnants, by Elaine Gallagher

Publisher: Tordotcom
Copyright: 2024
ISBN: 1-250-32522-6
Format: Kindle
Pages: 111

Unexploded Remnants is a science fiction adventure novella. The protagonist and world background would support an episodic series, but as of this writing it stands alone. It is Elaine Gallagher's first professional publication.

Alice is the last survivor of Earth: an explorer, information trader, and occasional associate of the Archive. She scouts interesting places, looks for inconsistencies in the stories the galactic civilizations tell themselves, and pokes around ruins for treasure. As this story opens, she finds a supposedly broken computer core in the Alta Sidoie bazaar that is definitely not what the trader thinks it is. Very shortly thereafter, she's being hunted by a clan of dangerous Delosi while trying to decide what to do with a possibly malevolent AI with frightening intrusion abilities.

This is one of those stories where all the individual pieces sounded great, but the way they were assembled didn't click for me. Unusually, I'm not entirely sure why. Often it's the characters, but I liked Alice well enough. The Lewis Carroll allusions were there but not overdone, her computer agent Bugs is a little too much of a Warner Brothers cartoon but still interesting, and the world building has plenty of interesting hooks. I certainly can't complain about the pacing: the plot moves briskly along to a somewhat predictable but still adequate conclusion. The writing is smooth and competent, and the world is memorable enough that I'm still thinking about it.

And yet, I never connected with this story. I think it may be because both Alice and the tight third-person narrator tend towards breezy confidence and matter-of-fact descriptions. Alice does, at times, get scared or angry, but I never felt those emotions. They were just events that were described to me. There wasn't an emotional hook, a place where the character grabbed me, and so it felt like everything was happening at an odd remove. The advantage of this approach is that there are no overwrought emotional meltdowns or brooding angstful protagonists, just an adventure story about a competent and thoughtful character, but I think I wanted a bit more emotional involvement than I got.

The world background is the best part and feels like it could be part of a larger series. The Milky Way is connected by an old, vast, and only partly understood network of teleportation portals, which had cut off Earth for unknown reasons and then just as mysteriously reactivated when Alice, then Andrew, drunkenly poked at a standing stone while muttering an old prayer in Gaelic. The Archive spent a year sorting out her intellectual diseases (capitalism was particularly alarming) and giving her a fresh start with a new body. Humanity subsequently destroyed itself in a paroxysm of reactionary violence, leaving Alice a free agent, one of a kind in a galaxy of dizzying variety and forgotten history.

Gallagher makes great use of the weirdness of the portal network to create a Star Wars style of universe: the focus is more on the diversity of the planets and alien species than on a coherent unifying structure. The settings of this book are not prone to Planet of the Hats problems. They instead have the contrasts that one would get if one dropped portals near current or former Earth population centers and then took a random walk through them (or, in other words, what playing GeoGuessr on a world map feels like). I liked this effect, but I have to admit that it also added to that sense of sliding off the surface of the story. The place descriptions were great bits of atmosphere, but I never cared about them. There isn't enough emotional coherence to make them memorable.

One of the more notable quirks of this story is the description of ideologies and prejudices as viral memes that can be cataloged, cured, and deployed like weapons. This is a theme of the world-building as well: this society, or at least the Archive-affiliated parts of it, classifies some patterns of thought as potentially dangerous but treatable contagious diseases. I'm not going to object too much to this as a bit of background and characterization in a fairly short novella stuffed with a lot of other world-building and plot, but there was something about treating ethical systems like diseases that bugged me in much the same way that medicalization of neurodiversity bugs me. I think some people will find that sense of moral clarity relaxing and others will find it vaguely irritating, and I seem to have ended up in the second group.

Overall, I would classify this as an interesting not-quite-success. It felt like a side story in a larger universe, like a story that would work better if I already knew Alice from other novels and had an established emotional connection with her. As is, I would not really recommend it, but there are enough good pieces here that I would be interested to see what Gallagher does next.

Rating: 6 out of 10

01 December, 2024 03:10AM


Sandro Knauß

QML Dependency tracking in Debian

Dependency tracking for shared libraries has worked in Debian for years: symbol usage is resolved to a library, which is then added to a package's list of dependencies. The KDE community nowadays creates more and more QML-based applications. Unfortunately, QML is an interpreted language, which means missing QML dependencies only become an issue at runtime.

To fix this I created dh_qmldeps, which searches for QML dependencies at build time and fails if it can't resolve a QML dependency.

I didn't write my own QML interpreter; dh_qmldeps just uses qmlimportscanner behind the scenes and processes its output further to resolve the QML modules to Debian packages.

The workflow is as follows:

The package compiles normally and is split into its binary packages. Then dh_qmldeps scans through the package contents to find QML content (.qml files, or qmldir files for QML modules). All found files are scanned by qmlimportscanner; the output is a list of imported QML modules. As QML modules have a standardized file path, we can ask the Debian system which packages ship that file path. We end up with a list of Debian packages in the variable ${qml6:Depends}. This variable can be attached to the list of dependencies of the scanned package. A maintainer can also lower some dependencies to Recommends or Suggests, if needed.
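To make the mechanics concrete, here is roughly what that resolution looks like done by hand (a sketch only; the qmlimportscanner location and the QML import path below are the usual Qt 6 locations on Debian, and debian/tmp stands in for the staged package contents):

# Scan the staged package contents for QML imports (prints JSON)
/usr/lib/qt6/libexec/qmlimportscanner \
    -rootPath debian/tmp \
    -importPath /usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)/qt6/qml

# A module such as QtQuick.Controls lives at a standardized path,
# which dpkg can resolve back to the package shipping it
dpkg -S '*/qt6/qml/QtQuick/Controls/qmldir'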

You can find the source code on salsa, and the usage documentation at https://qt-kde-team.pages.debian.net/dh_qmldeps.html.

Over the last weeks I enabled dh_qmldeps for nearly every package that creates a QML6 module package. The first bugs are solved, and it should be usable for more packages.

By scanning through all the code with qmlimportscanner, I found several non-existent QML modules:

  • import QtQuick3DPrivate qt6-multimedia - no Private QML module QTBUG-131753.
  • import QtQuickPrivate qt6-graphs - no Private QML module QTBUG-131754.
  • import QtQuickTimeline qt6-quicktimeline - the correct QML name is QtQuick.Timeline QTBUG-131755.
  • import QtQuickControls2 qt6-webengine - looks like a porting bug as the QML6 modules name is QtQuick.Controls QTBUG-131756.
  • import QtGraphicalEffects kquickimageeditor - the correct name for QML6 is qt5compat.graphicaleffects; probably, as it is only an example, nobody checks it kquickimageeditor!7.

YEAH - the first milestone is reached. We are able to simply handle QML modules.

For QML applications there is still room for improvement, though. In apps, the QML files are inside the executable. Additionally, applications create internal QML modules that are shipped directly in the same executable. I am still searching for a good way to analyse an executable to get a list of internal QML modules and a list of included QML files. Any ideas are welcome :)

As a workaround, dh_qmldeps currently scans all QML files inside the application source code.

01 December, 2024 12:00AM by Sandro Knauß

November 30, 2024

Dima Kogan

Strava track filtering validation

After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled.

I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering).

I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect.

Clearly this is not scientific, but it's something.

The code

Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:

import sys
import gpxpy

filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'

with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)

f_out = open(filename_out, 'w')

tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]

segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]

time0 = segment.points[0].time
print("# time lat lon ele_m")
for p in segment.points:
    print(f"{(p.time - time0).seconds} {p.latitude} {p.longitude} {p.elevation}",
          file = f_out)

And I process this data with the different filters (this is a silly Python loop, and is slow):

#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj

geod = None
def dist_ft(lat0,lon0, lat1,lon1):

    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.




f = 'OUTPUT.vnl'

track,list_keys,dict_key_index = \
    vnlog.slurp(f)

t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.



@nps.broadcast_define( ( (), ()),
                       (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):

    dist        = 0.0
    ele_gain_ft = 0.0

    lon_accepted = None
    lat_accepted = None
    ele_accepted = None

    for i in range(len(lat)):

        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted, lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )

            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue

            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here;

            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)

        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]

    # lose the last point. It simply doesn't matter

    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))




Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)

Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)


# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )


# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output

                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''

    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))

    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that gives
            # me a reasonable number of contours

            Nwant = 10
            increment = (contour_max - contour_min)/Nwant

            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))

            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor

        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))

        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font "{contour_labels_font}"' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )

        plotoptions['cbrange'] = [contour_min, contour_max]

        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'

    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work

    gp.add_plot_option(plotoptions, 'unset', 'grid')

    return \
        dict( tuplesize=3,
              legend = "", # needed to force contour labels
              using = using,
              _with=_with)




contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed gain vs filtering parameters')


contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed distance vs filtering parameters')

Results: gain

Strava says the gain was 46307ft. The analysis says:

[Figures: strava-gain.png and strava-gain-zoom.png]

These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is a bit low even.

Results: distance

Strava says the distance covered was 322 miles. The analysis says:

[Figures: strava-distance.png and strava-distance-zoom.png]

Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.

30 November, 2024 10:48PM by Dima Kogan

Enrico Zini

New laptop setup

My new laptop Framework (Framework Laptop 13 DIY Edition (AMD Ryzen™ 7040 Series)) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed.

This post has the notes of all the provisioning steps, so that I can replicate them again if needed.

Installing Debian 12

Debian 12's installer just worked, with Secure Boot enabled no less, which was nice.

The only glitch is an argument with the guided partitioner, which was uncooperative: I have been hit before by a /boot partition that was too small, and I wanted 1G of EFI and 1G of boot, while the partitioner decided that 512MB were good enough. Frustratingly, there was no way of changing that, nor did I find how to get more than 1G of swap, and I wanted enough swap to fit RAM for hibernation.

I let it install the way it pleased, then I booted into grml for a round of gparted.

The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet.

Shrink partitions:

  • mount the root filesystem in /mnt
  • btrfs filesystem resize 6G /mnt
  • umount the root filesystem
  • lvresize -L 7G vgname/lvname
  • pvresize --setphysicalvolumesize 8G /dev/mapper/pvname
  • cryptsetup resize --device-size 9G name

Note that I used an increasing size because I don't trust that each tool has a way of representing sizes that aligns to the byte. I'd be happy to find out that they do, but I didn't want to find out the hard way that they don't.

Resize with gparted:

Move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know.

Regrow partitions:

  • cryptsetup resize name
  • pvresize /dev/mapper/pvname
  • lvresize -l +100%FREE vgname/lvname
  • mount the root filesystem in /mnt
  • btrfs filesystem resize max /mnt
  • umount the root filesystem

Setup gnome

When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which normally ended up in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and I cannot fix will soon become a dealbreaker.

This applies to Gnome as present in Debian Stable.

General Gnome settings tips

I can list all available settings with:

gsettings list-recursively

which is handy for grepping things like hotkeys.

I can manually set a value with:

gsettings set <schema> <key> <value>

and I can reset it to its default with:

gsettings reset <schema> <key>

Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:

gsettings set <schema>:<path> <key> <value>

Install appindicators

First things first: apt install gnome-shell-extension-appindicator, log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session.

I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one.

Fix font sizes across monitors

My laptop screen and monitor have significantly different DPIs, so:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

And in Settings/Displays, set a reasonable scaling factor for each display.

Disable Alt/Super as hotkey for the Overlay

Seeing all my screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:

gsettings set org.gnome.mutter overlay-key ''

Focus-follows-mouse and Raise-or-lower

My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse, and when the visible part is not enough I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore.

Thankfully Gnome can be configured that way, with some work:

  • In gnome-shell settings, keyboard, shortcuts, windows, set "Raise window if covered, otherwise lower it" to "Super+Escape"
  • In gnome-tweak-tool, Windows, set "Focus on Hover"

This almost worked, but sometimes it didn't do what I wanted: for example, I expected a window to come to the front but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows.

The amount cannot be shortened, but it can be removed with:

gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false

Mouse and keyboard shortcuts

Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me.

It was a lot of work, and these are the steps I used to get rid of most of them.

Disable Super+N combinations that accidentally launch a questionable choice of programs:

for i in `seq 1 9`; do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done

Gnome-Shell settings:

  • Multitasking:
    • disable hot corner
    • disable active edges
    • set a fixed number of workspaces
    • workspaces on all displays
    • switching includes apps from current workspace only
  • Sound:
    • disable system sounds
  • Keyboard
    • Compose Key set to Caps Lock
    • View and Customize Shortcuts:
      • Launchers
        • launch help browser: remove
      • Navigation
        • move to workspace on the left: Super+Left
        • move to workspace on the right: Super+Right
        • move window one monitor …: remove
        • move window one workspace to the left: Shift+Super+Left
        • move window one workspace to the right: Shift+Super+Right
        • move window to …: remove
        • switch system …: remove
        • switch to …: remove
        • switch windows …: disabled
      • Screenshots
        • Record a screenshot interactively: Super+Print
        • Take a screenshot interactively: Print
        • Disable everything else
      • System
        • Focus the active notification: remove
        • Open the application menu: remove
        • Restore the keyboard shortcuts: remove
        • Show all applications: remove
        • Show the notification list: remove
        • Show the overview: remove
        • Show the run command prompt: Super+F2, or remove to leave it to the terminal (the default Gnome launcher is not for me)
      • Windows
        • Close window: remove
        • Hide window: remove
        • Maximize window: remove
        • Move window: remove
        • Raise window if covered, otherwise lower it: Super+Escape
        • Resize window: remove
        • Restore window: remove
        • Toggle maximization state: remove
    • Custom shortcuts
      • xfrun4, launching xfrun4, bound to Super+F2
  • Accessibility:
    • disable "Enable animations"

gnome-tweak-tool settings:

  • Keyboard & Mouse
    • Overview shortcut: Right Super. This cannot be disabled, but since my keyboard doesn't have a Right Super button, that's good enough for me. Oddly, I cannot find this in gsettings.
  • Window titlebars
    • Double-Click: Toggle-Maximize
    • Middle-Click: Lower
    • Secondary-Click: Menu
  • Windows
    • Resize with secondary click

Gnome Terminal settings:

Thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid:

  • Shortcuts

    • New tab: Super+T
    • New window: Super+N
    • Close tab: disabled
    • Close window: disabled
    • Copy: Super+C
    • Paste: Super+V
    • Search: all disabled
    • Previous tab: Super+Page Up
    • Next tab: Super+Page Down
    • Move tab…: Disabled
    • Switch to tab N: Super+Fn (only available after disabling overview)
    • Switch to tab N with Alt+Fn cannot be configured in the UI: Alt+Fn is detected as simply Fn. It can however be set with gsettings:

      for i in `seq 1 12`; do gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ switch-to-tab-$i "<Alt>F$i"; done

  • Profile

    • Text
      • Sound: disable terminal bell

Other hotkeys that got in my way and had to disable the hard way:

for n in `seq 1 12`; do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'

Note that even after removing F10 from being bound to menubar-accel, the only gsettings binding to F10 left is:

$ gsettings list-recursively|grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'

I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus in the window title bar. This looks like a Gnome bug, and a very frustrating one for me.

Appearance

Gnome-Shell settings:

  • Appearance:
    • dark mode

gnome-tweak-tool settings:

  • Fonts
    • Antialiasing: Subpixel
  • Top Bar
    • Clock/Weekday: enable (why is this not a default?)

Gnome Terminal settings:

  • General
    • Theme variant: Dark (somehow it wasn't picked up from the system settings)
  • Profile
    • Colors
      • Background: #000

Other decluttering and tweaks

Gnome Shell Settings:

  • Search
    • disable application search
  • Removable media
    • set everything to "ask what to do"
  • Default applications
    • Web: Chromium
    • Mail: mutt
    • Calendar: sadly, khal is not an option
    • Video: mpv
    • Photos: Geeqie

Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:

gsettings set org.gnome.desktop.screensaver lock-delay 30

Extensions

I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in the desktop. This stole a precious hotkey, which I had to disable in gsettings:

gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'

I also enabled:

  • Removable Drive Menu: why is this not on by default?
  • Workspace Indicator
  • Ubuntu Appindicators (apt install gnome-shell-extension-appindicator and restart Gnome)

I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary of running JavaScript code randomly downloaded from the internet with full access to my data and desktop interaction.

I also took care of checking that the Gnome Shell Extensions web page complains about the lacking "GNOME Shell integration" browser extension, because the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access.

Yuck.

Run program dialog

The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even just for executable names in path, and so it ends up being faster to open a new terminal window and type in there.

It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will now show up in gsettings, though it can be located in a more circuitous way by grepping first, and then looking up the resulting path in dconf-editor:

gsettings list-recursively|grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']
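For the record, such a custom shortcut can also be created without touching the UI at all (a sketch; the custom0 path matches the one discovered above, and org.gnome.settings-daemon.plugins.media-keys.custom-keybinding is the relocatable schema behind it):

p=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
s=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$p']"
gsettings set $s:$p name 'xfrun4'
gsettings set $s:$p command 'xfrun4'
gsettings set $s:$p binding '<Super>F2'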

I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland:

  • fuzzel does not start
  • gmrun is gtk2, last updated in 2016, but works fine
  • kupfer segfaults as I type
  • rofi shows, but can't get keyboard input
  • shellex shows a white bar at top of the screen and lots of errors on stderr
  • superkb wants to grab the screen for hotkeys
  • synapse searched news on the internet as I typed, which is a big no for me
  • trabucco crashes on startup
  • wofi works but looks like very much an acquired taste, though it has some completion that makes it more useful than Gnome's run dialog
  • xfrun4 (package xfce4-appfinder) struggles on wayland, being unable to center its window and with the pulldown appearing elsewhere in the screen, but it otherwise works

Both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.

TODO

  • Figure out what is still binding F10 to menu, and what I can do about it
  • Figure out how to reduce the size of window titlebars, which to my taste should be unobtrusive and not take 2.7% of vertical screen size each. There's a minwaita theme which isn't packaged in Debian. There's a User Theme extension, and then the whole theming can of worms to open. For another day.
  • Figure out if Gnome can be convinced to resize popup windows? Take the Gnome Terminal shortcut preferences for example: it takes ⅓ of the vertical screen and can only display ¼ of all available shortcuts, and I cannot find a valid reason why I shouldn't be allowed to enlarge it vertically.
  • Figure out if I can place shortcut launcher icons in the top panel, and how

I'll try to update these notes as I investigate.

Conclusion so far

I now have something that seems to work for me. A few papercuts to figure out still, but they seem manageable.

It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use.

Thankfully it can be switched off, and now I have notes to do it again if needed.

gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet giving guidance via the UI was obsolete, while guidance given as gsettings command lines kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths.

At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that.

Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.

30 November, 2024 08:13PM


Aurelien Jarno

UEFI Unified Kernel Image for Debian Installer on riscv64

On the riscv64 port, the default boot method is UEFI, with U-Boot typically used as the firmware. This approach aligns more closely with other architectures and avoids developing riscv64-specific code. For advanced users, booting using U-Boot and extlinux is possible, thanks to the kernel being built with CONFIG_EFI_STUB=y.
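As an aside, the extlinux path boots from a plain text config read by U-Boot's distro-boot logic; a minimal sketch (device names and paths here are illustrative):

# /boot/extlinux/extlinux.conf
default debian
label debian
    linux /boot/vmlinuz
    initrd /boot/initrd.img
    fdtdir /boot/dtbs
    append root=/dev/mmcblk0p2 ro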

The same applies to the Debian Installer, which is provided as ISO images in various sizes and formats like on other architectures. These images can be put on a USB drive or an SD-card and booted directly from U-Boot in UEFI mode. Some users prefer to use the netboot "image", which in practice consists of a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files.

However, booting this in UEFI mode is not straightforward, unless you use a TFTP server, which is also not trivial. Less known to users, there is also a corresponding mini.iso image, which contains all the above plus a bootloader. This offers a simpler alternative for installation, but depending on your (vendor) U-Boot version this still requires going through physical media.

Systemd version 257-rc2 comes with a great new feature, the ability to include multiple DTB files in a single UKI (Unified Kernel Image) file, with systemd-stub automatically loading the appropriate one for the current hardware. A UKI file combines a UEFI boot stub program, a Linux kernel image, an optional initrd, and further resources in a single UEFI PE file. This finally solves the DTB problem in the UEFI world for distributions, as a single image can work on multiple boards.
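For example, such a multi-DTB UKI can be assembled with systemd's ukify tool (a sketch assuming the new devicetree-auto support in systemd 257; check ukify(1) for the exact option names, and the file paths here are purely illustrative):

ukify build \
    --linux=vmlinux \
    --initrd=initrd.gz \
    --devicetree-auto=dtbs/sifive/hifive-unmatched-a00.dtb \
    --devicetree-auto=dtbs/starfive/jh7110-starfive-visionfive-2-v1.3b.dtb \
    --output=mini.efi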

Building upon this, debian-installer on riscv64 now also creates a UEFI UKI mini.efi image, which contains systemd-stub, a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files. Using this image also ensures that the system is booted in UEFI mode. Booting it with debian-installer is as simple as:

load mmc 0:1 mini.efi $kernel_addr_r # (can also be done using tftpboot, wget, etc.)
bootefi $kernel_addr_r

Additional parameters can be passed to the image using the U-Boot bootargs environment variable. For instance, to boot in rescue mode:

setenv bootargs "rescue/enable=true"

30 November, 2024 09:41AM by aurel32

November 29, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppAPT 0.0.10: Maintenance

A new version of the RcppAPT package arrived on CRAN earlier today. RcppAPT connects R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands (and their cache) which power Debian, Ubuntu and other derivative distributions.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.
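To give a flavour of the API, here are a couple of calls from memory of the package documentation (the exact function names are my recollection and worth verifying against the vignette):

library(RcppAPT)
hasPackages(c("r-base", "make"))   # logical: does apt know these packages?
getPackages("^r-cran-")            # data.frame of packages matching the regexp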

This release moves the C++ compilation standard from C++11 to C++17. I had removed the setting for C++11 last year as compilation ‘by compiler default’ worked well enough. But the version at CRAN still carried it, which started to lead to build failures on Debian unstable, so it was time for an update. And rather than implicitly relying on C++17 as selected by the last two R releases, we made it explicit. Otherwise a few of the regular package and repository updates have been made, but no new code or features were added. The NEWS entries follow.

Changes in version 0.0.10 (2024-11-29)

  • Package maintenance updating continuous integration script versions as well as coverage link from README, and switching to Authors@R

  • C++ compilation standards updated to C++17 to comply with libapt-pkg

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 November, 2024 08:19PM

Raju Devidas

Finding all sub domains of a main domain

Problem: Need to know all the subdomains of a main domain, e.g. example.com has a subdomain dev.example.com; I also want to know the other subdomains.

Solution:

Install the package called sublist3r, written by Ahmed Aboul-Ela

$ sudo apt install sublist3r

run the command

$ sublist3r -d example.com -o subdomains-example.com.txt

                 ____        _     _ _     _   _____
                / ___| _   _| |__ | (_)___| |_|___ / _ __
                \___ \| | | | '_ \| | / __| __| |_ \| '__|
                 ___) | |_| | |_) | | \__ \ |_ ___) | |
                |____/ \__,_|_.__/|_|_|___/\__|____/|_|

                # Coded By Ahmed Aboul-Ela - @aboul3la

[-] Enumerating subdomains now for example.com
[-] Searching now in Baidu..
[-] Searching now in Yahoo..
[-] Searching now in Google..
[-] Searching now in Bing..
[-] Searching now in Ask..
[-] Searching now in Netcraft..
[-] Searching now in DNSdumpster..
[-] Searching now in Virustotal..
[-] Searching now in ThreatCrowd..
[-] Searching now in SSL Certificates..
[-] Searching now in PassiveDNS..
Process DNSdumpster-8:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 269, in run
    domain_list = self.enumerate()
                  ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 649, in enumerate
    token = self.get_csrftoken(resp)
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 644, in get_csrftoken
    token = csrf_regex.findall(resp)[0]
            ~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
[!] Error: Google probably now is blocking our requests
[~] Finished now the Google Enumeration ...
[!] Error: Virustotal probably now is blocking our requests
[-] Saving results to file: subdomains-example.com.txt
[-] Total Unique Subdomains Found: 7
AS207960 Test Intermediate - example.com
www.example.com
dev.example.com
m.example.com
products.example.com
support.example.com
m.testexample.com

We can see the subdomains listed at the end of the command output.

enjoy, have fun, drink water!

29 November, 2024 10:31AM by rajudev

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes its new Outreachy interns

Outreachy logo

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy December 2024 - March 2025 round.

Patrick Noblet Appiah will work on Automatic Indi-3rd-party driver update, mentored by Thorsten Alteholz.

Divine Attah-Ohiemi will work on Making the Debian main website more attractive by switching to HuGo as site generator, mentored by Carsten Schoenert, Subin Siby and Thomas Lange.


Congratulations and welcome Patrick Noblet Appiah and Divine Attah-Ohiemi!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list.

29 November, 2024 09:15AM by Nilesh Patra


Russ Allbery

Review: The Duke Who Didn't

Review: The Duke Who Didn't, by Courtney Milan

Series: Wedgeford Trials #1
Publisher: Femtopress
Copyright: September 2020
ASIN: B08G4QC3JC
Format: Kindle
Pages: 334

The Duke Who Didn't is a Victorian romance novel, the first of a loosely-connected trilogy in the romance sense of switching protagonists between books. It's self-published, but by Courtney Milan, so the quality of the editing and publishing is about as high as you will see for a self-published novel.

Chloe Fong has a goal: to make her father's sauce the success that it should be. His previous version of the recipe was stolen by White and Whistler and is now wildly popular as Pure English Sauce. His current version is much better. In a few days, tourists will come from all over England to the annual festival of the Wedgeford Trials, and this will be Chloe's opportunity to give the sauce a proper debut and marketing push. There is only the small matter of making enough sauce and coming up with a good name. Chloe is very busy and absolutely does not have time for nonsense. Particularly nonsense in the form of Jeremy Yu.

Jeremy started coming to the Wedgeford Trials at the age of twelve. He was obviously from money and society, obviously enough that the villagers gave him the nickname Posh Jim after his participation in the central game of the trials. Exactly how wealthy and exactly which society, however, is something that he never quite explained, at first because he was having too much fun and then because he felt he'd waited too long. The village of Wedgeford was thriving under the benevolent neglect of its absent duke and uncollected taxes, and no one who loved it had any desire for that to change. Including Jeremy, the absent duke in question.

Jeremy had been in love with Chloe for years, but the last time he came to the Trials, Chloe told him to stop pursuing her unless he could be serious. That was three years and three Trials ago, and Chloe was certain Jeremy had made his choice by his absence. But Jeremy never forgot her, and despite his utter failure to become a more serious person, he is determined to convince her that he is serious about her. And also determined to finally reveal his identity without breaking everything he loves about the village. Somehow.

I have mentioned in other reviews that I mostly read sapphic instead of heterosexual romance because the gender roles in heterosexual romance are much more likely to irritate me. It occurred to me that I was probably being unfair to the heterosexual romance genre, I hadn't read nearly widely enough to draw any real conclusions, and I needed to find better examples. I've followed Courtney Milan occasionally on social media (for reasons unrelated to her novels) for long enough to know that she was unlikely to go for gender essentialism, and I'd been meaning to try one of her books for a while. Hence this novel.

It is indeed not gender-essentialist. Neither Chloe nor Jeremy fit into obvious gender boxes. Chloe is the motivating force in the novel and many of their interactions were utterly charming. But, despite that, the gender roles still annoyed me in ways that are entirely not the fault of this book. I'm not sure I can even put a finger on something specific. It's a low-grade, pervasive feeling that men do one type of thing and women do a different type of thing, and even if these characters don't stick to that closely, it saturates the vibes. (Admittedly, a Victorian romance was probably not the best choice when I knew this was my biggest problem with genre heterosexual romance. It was just what I had on hand.)

The conceit of the Wedgeford Trials series is that the small village of Wedgeford in England, through historical accident, ended up with an unusually large number of residents with Chinese ancestry. This is what I would call a "believable outlier": there was not such a village so far as I know, but there could well have been. At the least, there were way more people with non-English ancestry, including east Asian ancestry, in Victorian England than modern readers might think. There is quite a lot in this novel about family history, cultural traditions, immigration, and colonialism that I'm wholly unqualified to comment on but that was fascinating to read about and seemed (as one would expect from Milan) adroitly written.

As for the rest of the story, The Duke Who Didn't is absolutely full of banter. If your idea of a good time with a romance novel is teasing, word play, mock irritation, and endless verbal fencing as a way to avoid directly confronting difficult topics, you will be in heaven. Jeremy is one of those people who is way too much in his own head and has turned his problems into a giant ball of anxiety, but who is good at being the class clown, and therefore leans heavily on banter and making people laugh (or blush) as a way of avoiding whatever he's anxious about. I thought the characterization was quite good, but I admit I still got a bit tired of it. 350 pages is a lot of banter, particularly when the characters have some serious communication problems they need to resolve, and to fully enjoy this book you have to have a lot of patience for Jeremy's near-pathological inability to be forthright with Chloe.

Chloe's most charming characteristic is that she makes lists, particularly to-do lists. Her ideal days proceed as an orderly process of crossing things off of lists, and her way to approach any problem is to make a list. This is a great hook, and extremely relatable, but if you're going to talk this much about her lists, I want to see the lists! Chloe is all about details; show me the details! This book does not contain anywhere close to enough of Chloe's lists. I'm not sure there was a single list in this book that the reader both got to see the details of and that made it to more than three items. I think Chloe would agree that it's pointless to talk about the concept of lists; one needs to commit oneself to making an actual list.

This book I would unquestioningly classify as romantic comedy (which given my utter lack of familiarity with romance subgenres probably means that it isn't). Jeremy's standard interaction style with anyone is self-deprecating humor, and Chloe is the sort of character who is extremely serious in ways that strike other people as funny. Towards the end of the book, there is a hilarious self-aware subversion of a major romance novel trope that even I caught, despite my general lack of familiarity with the genre. The eventual resolution of Jeremy's problem of hidden identity caught me by surprise in that way where I should have seen it all along, and was both beautifully handled and quite entertaining.

All the pieces are here for a great time, and I think a lot of people would love this book. Somehow, it still wasn't quite my thing; I thoroughly enjoyed parts of it, but I don't find myself eager to read another. I'm kind of annoyed at myself that it didn't pull me in, since if I'd liked this I know where to find lots more like it. But ah well.

If you like banter-heavy heterosexual romance that is very self-aware about its genre without devolving into metafiction, this is at least worth a try.

Followed in the romance series way by The Marquis Who Mustn't, but this is a complete story with a satisfying ending.

Rating: 7 out of 10

29 November, 2024 06:32AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Tryton 7.0 LTS reaches Debian trixie (by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph)

Tryton is a FOSS software suite which is highly modular and scalable. Tryton along with its standard modules can provide a complete ERP solution or it can be used for specific functions of a business like accounting, invoicing etc.

Debian packages for Tryton are being maintained by Mathias Behrle. You can follow him on Mastodon or get his help on Tryton related projects through MBSolutions (his own consulting company).

Freexian has been sponsoring Mathias’s packaging work on Tryton for a while, so that Debian gets all the quarterly bug fix releases as well as the security releases in a timely manner.

About Tryton 7.0 LTS

Lately Mathias has been busy packaging Tryton 7.0 LTS. As the “LTS” tag implies, this release is recommended for production deployments since it will be supported until November 2028. This release brings numerous bug fixes, performance improvements and various new features.

As part of this work, 41 new Tryton modules and 3 dependency packages have been added to Debian, significantly broadening the options available to Debian users and improving integration with Tryton systems.

Running different versions of Tryton on different Debian releases

To provide extended compatibility, a dedicated Tryton mirror is being managed and is available at https://debian.m9s.biz/debian/. This mirror hosts backports for all supported Tryton series, ensuring availability for a variety of Debian releases and deployment scenarios.

These initiatives highlight MBSolutions’ technical contributions to the Tryton community, made possible by Freexian’s financial backing. Together, we are advancing the Tryton ecosystem for Debian users.

29 November, 2024 12:00AM by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph

November 28, 2024

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Joachim Bauch (fancycode)
  • Alexander Kjäll (capitol)
  • Jan Mojžíš (janmojzis)
  • Xiao Sheng Wen (atzlinux)

The following contributors were added as Debian Maintainers in the last two months:

  • Alberto Bertogli
  • Alexis Murzeau
  • David Heilderberg
  • Xiyue Deng
  • Kathara Sasikumar
  • Philippe Swartvagher

Congratulations!

28 November, 2024 05:00PM by Jean-Pierre Giraud

November 27, 2024

OpenStreetMap migrates to Debian 12

You may have seen this toot announcing OpenStreetMap's migration to Debian on their infrastructure.

🚀 After 18 years on Ubuntu, we've upgraded the @openstreetmap servers to Debian 12 (Bookworm). 🌍 openstreetmap.org is now faster using Ruby 3.1. Onward to new mapping adventures! Thank you to the team for the smooth transition. #OpenStreetMap #Debian 🤓

We spoke with Grant Slater, the Senior Site Reliability Engineer for the OpenStreetMap Foundation. Grant shares:

Why did you choose Debian?

There is a large overlap between OpenStreetMap mappers and the Debian community. Debian also has excellent coverage of OpenStreetMap tools and utilities, which helped with the decision to switch to Debian.

The Debian package maintainers do an excellent job of maintaining their packages - e.g.: osm2pgsql, osmium-tool etc.

Part of our reason to move to Debian was to get closer to the maintainers of the packages that we depend on. Debian maintainers appear to be heavily invested in the software packages that they support and we see critical bugs get fixed.

What drove this decision to migrate?

OpenStreetMap.org is primarily run on actual physical hardware that our team manages. We attempt to squeeze as much performance from our systems as possible, with some services being particularly I/O bound. We ran into some severe I/O performance issues with kernels ~6.0 to < ~6.6 on systems with NVMe storage. This pushed us onto newer mainline kernels, which led us toward Debian. On Debian 12 we could simply install the backport kernel and the performance issues were solved.

How was the transition managed?

Thankfully we manage our server setup nearly completely with code. We also use Test Kitchen with inspec to test this infrastructure code. Tests run locally using Podman or Docker containers, but also run as part of our git code pipeline.

We added Debian as a test target platform and fixed up the infrastructure code until all the tests passed. The changes required were relatively small, simple package name or config filename changes mostly.

What was your timeline of transition?

In August 2024 we moved the www.openstreetmap.org Ruby on Rails servers across to Debian. We haven't yet finished moving everything across to Debian, but we will upgrade the rest when it makes sense. Some systems may wait until the next hardware upgrade cycle.

Our focus is to build a stable and reliable platform for OpenStreetMap mappers.

How has the transition from another Linux distribution to Debian gone?

We are still in the process of fully migrating between Linux distributions, but we can share that we recently moved our frontend servers to Debian 12 (from Ubuntu 22.04) which bumped the Ruby version from 3.0 to 3.1 which allowed us to also upgrade the version of Ruby on Rails that we use for www.openstreetmap.org.

We also changed our chef code for managing the network interfaces from using netplan (default in Ubuntu, made by Canonical) to directly using systemd-networkd to manage the network interfaces, to allow commonality between how we manage the interfaces in Ubuntu and our upcoming Debian systems. Over the years we've standardised our networking setup to use 802.3ad bonded interfaces for redundancy, with VLANs to segment traffic; this setup worked well with systemd-networkd.
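A minimal sketch of such a setup with systemd-networkd (not OSM's actual configuration; interface names and the VLAN id are invented) takes only a few small files:

# /etc/systemd/network/10-bond0.netdev -- define the 802.3ad bond
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad

# /etc/systemd/network/10-uplinks.network -- enslave the physical NICs
[Match]
Name=enp1s0f0 enp1s0f1

[Network]
Bond=bond0

# /etc/systemd/network/20-vlan42.netdev -- a VLAN to segment traffic
[NetDev]
Name=vlan42
Kind=vlan

[VLAN]
Id=42

# /etc/systemd/network/20-bond0.network -- attach the VLAN to the bond
[Match]
Name=bond0

[Network]
VLAN=vlan42

The vlan42 interface would then get its addressing from a .network file of its own, elided here.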

We use netboot.xyz for PXE networking booting OS installers for our systems and use IPMI for the out-of-band management. We remotely re-installed a test server to Debian 12, and fixed a few minor issues missed by our chef tests. We were pleasantly surprised how smoothly the migration to Debian went.

In a few limited cases we've used Debian Backports for a few packages where we've absolutely had to have a newer feature. The Debian package maintainers are fantastic.

What definitely helped us is our code is libre/free/open-source, with most of the core OpenStreetMap software like osm2pgsql already in Debian and well packaged.

In some cases we do run pre-release or custom patches of OpenStreetMap software; with Ubuntu we used launchpad.net's Personal Package Archives (PPA) to build and host deb repositories for these custom packages. We were initially perplexed by the myriad of options in Debian (see this list - eeek!), but received some helpful guidance from a Debian contributor and we now manage our own deb repository using aptly. For the moment we're currently building deb packages locally and pushing to aptly; ideally we'd like to replace this with a git driven pipeline for building the custom packages in the future.
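For readers who haven't used it, a minimal aptly workflow along those lines (repository and package names are placeholders, and publishing requires a GPG key for signing) looks like:

$ aptly repo create -distribution=bookworm -component=main custom
$ aptly repo add custom ./osm2pgsql_*.deb
$ aptly publish repo custom      # first publication
$ aptly publish update bookworm  # refresh after adding newer packages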

Thank you for taking the time to share your experience with us.

Thank you to all the awesome people who make Debian!


We are overjoyed to share this in-use case which demonstrates our commitment to stability, development, and long term support. Debian offers users, companies, and organisations the ability to plan, scope, develop, and maintain at their own pace using a rock solid stable Linux distribution with responsive developers.

Does your organisation use Debian in some capacity? We would love to hear about it and your use of 'The Universal Operating System'. Reach out to us at Press@debian.org - we would be happy to add your organisation to our 'Who's Using Debian?' page and to share your story!

About Debian

The Debian Project is an association of individuals who have made common cause to create a free operating system. This operating system that we have created is called Debian. Installers and images, such as live systems, offline installers for systems without a network connection, installers for other CPU architectures, or cloud instances, can be found at Getting Debian.

27 November, 2024 01:01PM by Donald Norwood, Stefano Rivera, Justin B Rye

November 26, 2024

Emanuele Rocca

Building Debian packages The Right Way

There is more than one way to do it, but it seems that The Right Way to build Debian packages today is using sbuild with the unshare backend. The most common backend before the rise of unshare was schroot.

The official Debian Build Daemons have recently transitioned to using sbuild with unshare, providing a strong motivation to consider making the switch. Additionally the new approach means: (1) no need to configure schroot, and (2) no need to run the build as root.

Here are my notes about moving to the new setup, for future reference and in case they may be useful to others.

First I installed the required packages:

apt install sbuild mmdebstrap uidmap

Then I created the following script to update my chroots every night:

#!/bin/bash

for arch in arm64 armhf armel; do
    HOME=/tmp mmdebstrap --quiet --arch=$arch --include=ca-certificates --variant=buildd unstable \
        ~/.cache/sbuild/unstable-$arch.tar http://127.0.0.1:3142/debian
done

In the script, I’m calling mmdebstrap with --quiet because I don’t want to get any output on successful execution. The script is running in cron with email notifications, and I only want to get a message if something goes south. I’m setting HOME=/tmp for a similar reason: the unshare user does not have access to my actual home directory, and by default dpkg tries to use $HOME/.dpkg.cfg as the configuration file. By overriding HOME I avoid the following message to standard error:

dpkg: warning: failed to open configuration file '/home/ema/.dpkg.cfg' for reading: Permission denied
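The nightly run itself is then just a crontab entry along these lines (a sketch: address, path and schedule are made up for illustration; cron only sends mail when the job produces output):

MAILTO=you@example.org
# refresh the sbuild chroot tarballs every night
30 3 * * * /usr/local/bin/update-sbuild-chroots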

Then I added the following to my sbuild configuration file (~/.sbuildrc):

$chroot_mode = 'unshare';
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';

The first option sets the sbuild backend to unshare, whereas unshare_tmpdir_template is needed on Bookworm to ensure that the build process runs in memory rather than on disk for performance reasons. Starting with Trixie, /tmp is by default a tmpfs so the setting won’t be needed anymore.

Packages for different architectures can now be built as follows:

# Tarball used: ~/.cache/sbuild/unstable-arm64.tar
$ sbuild --dist=unstable hello

# Tarball used: ~/.cache/sbuild/unstable-armhf.tar
$ sbuild --dist=unstable --arch=armhf hello

If you have any comments or suggestions about any of this, please let me know.

26 November, 2024 08:34AM by Emanuele Rocca (ema@linux.it)

hackergotchi for Sandro Knauß

Sandro Knauß

Akademy 2024 in Würzburg

In order to prepare for Akademy I started some days beforehand to give my Librem 5 (an open hardware phone) another try, and ended up with a non-starting Plasma 6. Actually this issue was known already, but hadn't been addressed. In the end I reached Akademy with my Librem 5 running phosh (which is Gnome based), in order to have something working.

I met Bushan and Bart, who took care of it, and two days later the issue was fixed, so I could finally install Plasma 6 on it. The last time I tested my Librem 5 with Plasma 5 it felt sluggish and did not work well. But this time I was impressed how well the system reacts. Sure, there are some things here and there, but in the bigger picture it is quite useable. One annoying issue is that the camera only works with one app; the other issue is the battery capacity: you have to charge it once a day. Because a QR reader that could use the camera was missing, getting data onto the phone was quite challenging. Unfortunately the conference Wifi separated the devices, so I couldn't use KDE Connect to transfer data. In the end the only way to import my D-Ticket into Itinerary was to take five photos of the QR code.

With a Plasma Mobile device at hand, it was directly used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out and were quite impressed that Dolphin works very well on Plasma Mobile after some simple modifications of the UI. That resulted in a patch to add a mobile UI for Dolphin !826.

With more time to play with my Librem 5 I also found a bug in KWeather: a Refresh option is missing when it is used in a Plasma Mobile environment #493656.

Akademy is a good place to identify and solve some issues. It is always like that: you chat with someone, they can tell you who to ask about your concrete question, and in the end you can solve things that seemed unsolvable in the beginning.

There was also time to look into the travelling app Itinerary. A lot of people are faced with a lot of real-world issues when not in their home town. Itinerary is the best travelling app I know of. It can import nearly every ticket you have, can get location information from restaurant websites, and allows routing to that place. It adds many useful pieces of information while travelling, like current delays, platform changes, live updates for elevators, weather information at the destination, a station map, and all those features with a strong focus on privacy.

In detail I found some small things to improve:

  • If you search for a bus ride and enter the correct name for the bus stop, it will still add some walk from and to the station. The issue here is that we use different backends, and not all backends share the same geo coordinates. That's why Itinerary needs to add some heuristics to remove those paths. walk to and from the bus stop

  • Instead of displaying just a small station map of one bus stop in the inner city, it showed the complete Würzburg inner city, as there is one big park around the inner city (named "Ringpark").

  • Würzburg has a quite big bus station, but the platform information was missing in the map, so we tweaked the CSS to display the platforms. To be sure that we don't fix only Würzburg, we also looked at Greifswald and Aix-en-Provence to check whether they follow the same naming scheme.

I additionally learned that it has a lot of details that help people who have special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. Once Daniel had spoken out about this goal, others already started to implement the first steps towards building apps for iOS.

This year I was volunteering to help out at Akademy. For me it was a lot of fun to meet everyone at the infodesk or to help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them. I also see room for improvement: as we were quite busy at the Welcome Event getting the badges out to everyone, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available for questions from newcomers. Often it is hard for newcomers to get their first contact(s) in a new community. There is a lot of space for improvement to make it easier for newcomers to join. Some ideas in my head are: make an event for the newcomers to give them some links into the community and show that everyone is friendly. The tables at the BoFs should form a circle, so everyone can see each other; it was also hard for me to understand everyone, as they mostly spoke towards the front. And then BoFs are sometimes full of very specific words, and if you are not already deep into the topic you are lost. I can see the problem: on the one side, BoFs are also the place where the people who already know the topic want to get things done; on the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated, and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in.

I'm happy that the food provided for the attendees was very delicious, and that I'm not the only one who is mostly vegetarian, with a good number being vegan.

At the conference the KDE Eco initiative really caught my attention, as I see a lot of new possibilities in giving more reasons to switch to an Open Source system. The talk from Natalie was great: seeing how pupils get excited about Open Source and also help their grandparents to move to a Linux system. As I will also start to work as a teacher, I really got ideas about what I can do at school. Together with Joseph and Nicole, we finally started to think about how to drive an exploration of what kind of old hardware KDE software still runs on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40.

The conference was very motivating for me; I still had energy in the evening to do some Debian packaging, and finally pushed kweathercore to Debian and started to work on KWeather. Now I'm even more interested in the KDE apps focusing on the mobile world, as I now have hardware that can actually use those apps.

I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly way, answered every question, and sometimes postponed questions but came back to them later. All in all I now have a good overview of how Qt development works and how I can fix bugs.

The daytrip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the village, but in my memory it feels like I know it already. I grew up reading a lot of comic albums, including the good sci-fi comic album series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the edge of life" she helps her friend Ingrid, who actually lives in Rothenburg. Leloup invested a lot of time travelling to make his drawings as accurate as possible.

a comic page with Yoko Tsuno in Rothenburg ob der Tauber

In order to not have a hard cut from Akademy to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and make both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help to get rid of most hand-written lists of dev packages in the Docker file, in order to make it simpler for new contributors to start hacking on KDEPIM.

The rest of the day I finally found time to do the normal tourist stuff: going to the Wine bridge and having a walk to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden.

Finally on Saturday I started my trip back. The trains towards Eberswalde were disrupted and I needed to find alternative routing. I got a little bit nervous, as it was the first time I travelled with only my Librem 5 and Itinerary, and I needed to reach the next train in less than two minutes. With the indoor maps provided, I could prepare my run through the train station and successfully reached my next train.

By the way, even if you only use KDE software, I would recommend everyone to join Akademy ;)

26 November, 2024 12:00AM by Sandro Knauß

November 24, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

plocate 1.1.23 released

I've just released version 1.1.23 of plocate, almost a year after 1.1.22. The changes are mostly around the systemd unit this time, but perhaps more interesting is that this is the first release where I don't have the majority of patches; in fact, I don't have any patches at all. All of them came from contributors, many of them through the “just do git push to send me a patch email” interface.

I guess this means that I'll need to actually start streamlining my “git am” workflow… it gets me every time. :-)

24 November, 2024 10:27PM

Edward Betts

A mini adventure at MiniDebConf Toulouse

A mini adventure at MiniDebConf Toulouse

Last week, I ventured to Toulouse, for a delightful mix of coding, conversation, and crepes at MiniDebConf Toulouse, part of the broader Capitole du Libre conference, akin to the more well-known FOSDEM but with a distinctly French flair.

This was my fourth and final MiniDebConf of the year.

no jet bridge

My trek to Toulouse was seamless. I hopped on a bus from my home in Bristol to the airport, then took a short flight. I luxuriated in seat 1A, making me the first to disembark—a mere ten minutes later, I was already on the bus heading to my hotel.

Exploring the Pink City

pink

duck shop

Once settled, I wasted no time exploring the charms of Toulouse. Just a short stroll from my hotel, I found myself beside a tranquil canal, its waters mirroring the golden hues of the trees lining its banks. Autumn in Toulouse painted the city in warm oranges and reds, creating a picturesque backdrop that was a joy to wander through. Every corner of the street revealed more of the city's rich cultural tapestry and striking architecture. Known affectionately as 'La Ville Rose' (The Pink City) for its unique terracotta brickwork, Toulouse captivated me with its blend of historical allure and vibrant modern life.

MiniDebCamp

FabLab sign

laptop setup

Prior to the main event, the MiniDebCamp provided two days of hacking at Artilect FabLab—a space as creative as it was welcoming. It was a pleasure to reconnect with familiar faces and forge new friendships.

Culinary delights

lunch 1

cakes

The hospitality was exceptional. Our lunches boasted a delicious array of quiches, an enticing charcuterie board, and a superb selection of cheeses, all perfectly complemented by exquisite petite fours. Each item was not only a feast for the eyes but also a delight for the palate.

Wine and cheese

wine and cheese 1

wine and cheese 2

Leftovers from these gourmet feasts fuelled our impromptu cheese and wine party on Thursday evening—a highlight where informal chats blended seamlessly with serious software discussions.

The river at night

night river 1

night river 2

night river 3

night river 4

The enchantment of Toulouse doesn't dim with the setting sun; instead, it transforms. My evening strolls took me along the banks of the Garonne, under a sky just turning from twilight to velvet blue. The river, a dark mirror, perfectly reflected the illuminated grandeur of the city's architecture. Notably, the dome of the Hôpital de La Grave stood out, bathed in a warm glow against the night sky. This architectural gem, coupled with the soft lights of the bridge and the serene river, created a breathtaking scene that was both tranquil and awe-inspiring.

Capitole du Libre

making crepes

The MiniDebConf itself, part of the larger Capitole du Libre event, was a fantastic immersion into the world of free software. Unlike the ticket-free FOSDEM, this conference required QR codes for entry and even had bag searches, adding an unusual layer of security for a software conference.

Highlights included the crepe-making by the organisers, reminiscent of street food scenes from larger festivals. The availability of crepes for MiniDebConf attendees and the presence of food trucks added a festive air, albeit with the inevitable long queues familiar to any festival-goer.

vélôToulouse

bike

cyclocity

The city's bike rental system was a boon—easy to use with handy bike baskets perfect for casual city touring. I chose pedal power over electric, finding it a pleasant way to navigate the streets and absorb the city's vibrant atmosphere.

Markets

market

flatbreads

Toulouse's markets were a delightful discovery. From a spontaneous visit to a market near my hotel upon arrival, to cycling past bustling marketplaces, each day presented new local flavours and crafts to explore.

The Za'atar flatbread from a Syrian stall was a particularly memorable lunch pick.

La brasserie Les Arcades

Our conference wrapped up with a spontaneous gathering at La Brasserie Les Arcades in Place du Capitole. Finding a café that could accommodate 30 of us on a Sunday evening without a booking felt like striking gold. What began with coffee and ice cream smoothly transitioned into dinner, where I enjoyed a delicious braised duck leg with green peppercorn sauce. This meal rounded off the trip with lively conversations and shared experiences.

The journey back home

Returning from Toulouse, I found myself once again in seat 1A, offering the advantage of being the first off the plane, both on departure and arrival. My flight touched down in Bristol ahead of schedule, and within ten minutes, I was on the A1 bus, making my way back into the heart of Bristol.

Anticipating DebConf 25 in Brittany

My trip to Toulouse for MiniDebConf was yet another fulfilling experience; the city was delightful, and the talks were insightful. While I frequently travel, these journeys are more about continuous learning and networking than escape. The food in Toulouse was particularly impressive, a highlight I've come to expect and relish on my trips to France. Looking ahead, I'm eagerly anticipating DebConf in Brest next year, especially the opportunity to indulge once more in the excellent French cuisine and beverages.

24 November, 2024 12:42PM

November 22, 2024

hackergotchi for Matthew Palmer

Matthew Palmer

Your Release Process Sucks

For the past decade-plus, every piece of software I write has had one of two release processes.

Software that gets deployed directly onto servers (websites, mostly, but also the infrastructure that runs Pwnedkeys, for example) is deployed with nothing more than git push prod main. I’ll talk more about that some other day.

Today is about the release process for everything else I maintain – Rust / Ruby libraries, standalone programs, and so forth. To release those, I use the following, extremely intricate process:

  1. Create an annotated git tag, where the name of the tag is the software version I’m releasing, and the annotation is the release notes for that version.

  2. Run git release in the repository.

  3. There is no step 3.

Yes, it absolutely is that simple. And if your release process is any more complicated than that, then you are suffering unnecessarily.

But don’t worry. I’m from the Internet, and I’m here to help.

Sidebar: “annotated what-now?!?”

The annotated tag is one of git's best-kept secrets. They've been available in git for practically forever (I've been using them since at least 2014, which is “practically forever” in software development), yet almost everyone I mention them to has never heard of them.

A “tag”, in git parlance, is a repository-unique named label that points to a single commit (as identified by the commit’s SHA1 hash). Annotating a tag is simply associating a block of free-form text with that tag.

Creating an annotated tag is simple-sauce: git tag -a tagname will open up an editor window where you can enter your annotation, and git tag -a -m "some annotation" tagname will create the tag with the annotation “some annotation”. Retrieving the annotation for a tag is straightforward, too: git show tagname will display the annotation along with all the other tag-related information.

Now that we know all about annotated tags, let’s talk about how to use them to make software releases freaking awesome.

Step 1: Create the Annotated Git Tag

As I just mentioned, creating an annotated git tag is pretty simple: just add a -a (or --annotate, if you enjoy typing) to your git tag command, and WHAM! annotation achieved.

Releases, though, typically have unique and ever-increasing version numbers, which we want to encode in the tag name. Rather than having to look at the existing tags and figure out the next version number ourselves, we can have software do the hard work for us.

Enter: git-version-bump. This straightforward program takes one mandatory argument: major, minor, or patch, and bumps the corresponding version number component in line with Semantic Versioning principles. If you pass it -n, it opens an editor for you to enter the release notes, and when you save out, the tag is automagically created with the appropriate name.

Because the program is called git-version-bump, you can call it as a git command: git version-bump. Also, because version-bump is long and unwieldy, I have it aliased to vb, with the following entry in my ~/.gitconfig:

[alias]
    vb = version-bump -n

Of course, you don’t have to use git-version-bump if you don’t want to (although why wouldn’t you?). The important thing is that the only step you take to go from “here is our current codebase in main” to “everything as of this commit is version X.Y.Z of this software”, is the creation of an annotated tag that records the version number being released, and the metadata that goes along with that release.

Step 2: Run git release

As I said earlier, I’ve been using this release process for over a decade now. So long, in fact, that when I started, GitHub Actions didn’t exist, and so a lot of the things you’d delegate to a CI runner these days had to be done locally, or in a more ad-hoc manner on a server somewhere.

This is why step 2 in the release process is “run git release”. It's because, historically, you couldn't do everything in a CI run. Nowadays, most of my repositories have this in the .git/config:

[alias]
    release = push --tags

Older repositories which, for one reason or another, haven’t been updated to the new hawtness, have various other aliases defined, which run more specialised scripts (usually just rake release, for Ruby libraries), but they’re slowly dying out.

The reason why I still have this alias, though, is that it standardises the release process. Whether it’s a Ruby gem, a Rust crate, a bunch of protobuf definitions, or whatever else, I run the same command to trigger a release going out. It means I don’t have to think about how I do it for this project, because every project does it exactly the same way.

The Wiring Behind the Button

It wasn’t the button that was the problem. It was the miles of wiring, the hundreds of miles of cables, the circuits, the relays, the machinery. The engine was a massive, sprawling, complex, mind-bending nightmare of levers and dials and buttons and switches. You couldn’t just slap a button on the wall and expect it to work. But there should be a button. A big, fat button that you could press and everything would be fine again. Just press it, and everything would be back to normal.

  • Red Dwarf: Better Than Life

Once you’ve accepted that your release process should be as simple as creating an annotated tag and running one command, you do need to consider what happens afterwards. These days, with the near-universal availability of CI runners that can do anything you need in an isolated, reproducible environment, the work required to go from “annotated tag” to “release artifacts” can be scripted up and left to do its thing.

What that looks like, of course, will probably vary greatly depending on what you’re releasing. I can’t really give universally-applicable guidance, since I don’t know your situation. All I can do is provide some of my open source work as inspirational examples.

For starters, let’s look at a simple Rust crate I’ve written, called strong-box. It’s a straightforward crate, that provides ergonomic and secure cryptographic functionality inspired by the likes of NaCl. As it’s just a crate, its release script is very straightforward. Most of the complexity is working around Cargo’s inelegant mandate that crate version numbers are specified in a TOML file. Apart from that, it’s just a matter of building and uploading the crate. Easy!

Slightly more complicated is action-validator. This is a Rust CLI tool which validates GitHub Actions and Workflows (how very meta) against a published JSON schema, to make sure you haven’t got any syntax or structural errors. As not everyone has a Rust toolchain on their local box, the release process helpfully builds binaries for several common OSes and CPU architectures that people can download if they choose. The release process in this case is somewhat larger, but not particularly complicated. Almost half of it is actually scaffolding to build an experimental WASM/NPM build of the code, because someone seemed rather keen on that.

Moving away from Rust, and stepping up the meta another notch, we can take a look at the release process for git-version-bump itself, my Ruby library and associated CLI tool which started me down the “Just Tag It Already” rabbit hole many years ago. In this case, since gemspecs are very amenable to programmatic definition, the release process is practically trivial. Remove the boilerplate and workarounds for GitHub Actions bugs, and you’re left with about three lines of actual commands.

These approaches can certainly scale to larger, more complicated processes. I’ve recently implemented annotated-tag-based releases in a proprietary software product, that produces Debian/Ubuntu, RedHat, and Windows packages, as well as Docker images, and it takes all of the information it needs from the annotated tag. I’m confident that this approach will successfully serve them as they expand out to build AMIs, GCP machine images, and whatever else they need in their release processes in the future.

Objection, Your Honour!

I can hear the howl of the “but, actuallys” coming over the horizon even as I type. People have a lot of Big Feelings about why this release process won’t work for them. Rather than overload this article with them, I’ve created a companion article that enumerates the objections I’ve come across, and answers them. I’m also available for consulting if you’d like a personalised, professional opinion on your specific circumstances.

DVD Bonus Feature: Pre-releases

Unless you’re addicted to surprises, it’s good to get early feedback about new features and bugfixes before they make it into an official, general-purpose release. For this, you can’t go past the pre-release.

The major blocker to widespread use of pre-releases is that cutting a release is usually a pain in the behind. If you’ve got to edit changelogs, and modify version numbers in a dozen places, then you’re entirely justified in thinking that cutting a pre-release for a customer to test that bugfix that only occurs in their environment is too much of a hassle.

The thing is, once you’ve got releases building from annotated tags, making pre-releases on every push to main becomes practically trivial. This is mostly due to another fantastic and underused Git command: git describe.

How git describe works is, basically, that it finds the most recent commit that has an associated annotated tag, and then generates a string that contains that tag’s name, plus the number of commits between that tag and the current commit, with the current commit’s hash included, as a bonus. That is, imagine that three commits ago, you created an annotated release tag named v4.2.0. If you run git describe now, it will print out v4.2.0-3-g04f5a6f (assuming that the current commit’s SHA starts with 04f5a6f).

You might be starting to see where this is going. With a bit of light massaging (essentially, removing the leading v and replacing the -s with .s), that string can be converted into a version number which, in most sane environments, is considered “newer” than the official 4.2.0 release, but will be superseded by the next actual release (say, 4.2.1 or 4.3.0). If you’re already injecting version numbers into the release build process, injecting a slightly different version number is no work at all.
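In shell terms, that light massaging is a one-liner (a sketch; adjust the sed to your own versioning scheme):

$ git describe --tags
v4.2.0-3-g04f5a6f
$ git describe --tags | sed -e 's/^v//' -e 's/-/./g'
4.2.0.3.g04f5a6f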

Then, you can easily build release artifacts for every commit to main, and make them available somewhere they won’t get in the way of the “official” releases. For example, in the proprietary product I mentioned previously, this involves uploading the Debian packages to a separate component (prerelease instead of main), so that users that want to opt-in to the prerelease channel simply modify their sources.list to change main to prerelease. Management have been extremely pleased with the easy availability of pre-release packages; they’ve been gleefully installing them willy-nilly for testing purposes since I rolled them out.

In fact, even while I’ve been writing this article, I was asked to add some debug logging to help track down a particularly pernicious bug. I added the few lines of code, committed, pushed, and went back to writing. A few minutes later (next week’s job is to cut that in-process time by at least half), the person who asked for the extra logging ran apt update; apt upgrade, which installed the newly-built package, and was able to progress in their debugging adventure.

Continuous Delivery: It’s Not Just For Hipsters.

“+1, Informative”

Hopefully, this has spurred you to commit your immortal soul to the Church of the Annotated Tag. You may tithe by buying me a refreshing beverage. Alternately, if you’re really keen to adopt more streamlined release management processes, I’m available for consulting engagements.

22 November, 2024 08:15PM by Matt Palmer (mpalmer@hezmatt.org)

Invalid Excuses for Why Your Release Process Sucks

In my companion article, I made the bold claim that your release process should consist of no more than two steps:

  1. Create an annotated Git tag;

  2. Run a single command to trigger the release pipeline.

As I have been on the Internet for more than five minutes, I’m aware that a great many people will have a great many objections to this simple and straightforward idea. In the interests of saving them a lot of wear and tear on their keyboards, I present this list of common reasons why these objections are invalid.

If you have an objection I don’t cover here, the comment box is down the bottom of the article. If you think you’ve got a real stumper, I’m available for consulting engagements, and if you turn out to have a release process which cannot feasibly be reduced to the above two steps for legitimate technical reasons, I’ll waive my fees.

“But I automatically generate my release notes from commit messages!”

This one is really easy to solve: have the release note generation tool feed directly into the annotation. Boom! Headshot.

“But all these files need to be edited to make a release!”

No, they absolutely don’t. But I can see why you might think you do, given how inflexible some packaging environments can seem, and since “that’s how we’ve always done it”.

Language Packages

Most languages require you to encode the version of the library or binary in a file that you want to revision control. This is teh suck, but I’m yet to encounter a situation that can’t be worked around some way or another.

In Ruby, for instance, gemspec files are actually executable Ruby code, so I call code (that’s part of git-version-bump, as an aside) to calculate the version number from the git tags. The Rust build tool, Cargo, uses a TOML file, which isn’t as easy, but a small amount of release automation is used to take care of that.
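For illustration, the gemspec side of that looks roughly like this (written from memory of git-version-bump's README, so treat the exact constant names as assumptions to verify):

require 'git-version-bump'

Gem::Specification.new do |s|
  s.name    = 'my-gem'
  s.version = GVB.version   # computed from the most recent annotated tag
  s.date    = GVB.date      # likewise, the date of that tag
  # ...
end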

Distribution Packages

If you’re building Linux distribution packages, you can easily apply similar automation faffery. For example, Debian packages take their metadata from the debian/changelog file in the build directory. Don’t keep that file in revision control, though: build it at release time. Everything you need to construct a Debian (or RPM) changelog is in the tag – version numbers, dates, times, authors, release notes. Use it for much good.

The Dreaded Changelog

Finally, there’s the CHANGELOG file. If it’s maintained during the development process, it typically has an archive of all the release notes, under version numbers, with an “Unreleased” heading at the top. It’s one more place to remember to have to edit when making that “preparing release X.Y.Z” commit, and it is a gift to the Demon of Spurious Merge Conflicts if you follow the policy of “every commit must add a changelog entry”.

My solution: just burn it to the ground. Add a line to the top with a link to wherever the contents of annotated tags get published (such as GitHub Releases, if that’s your bag) and never open it ever again.

“But I need to know other things about my release, too!”

For some reason, you might think you need some other metadata about your releases. You’re probably wrong – it’s amazing how much information you can obtain or derive from the humble tag – so think creatively about your situation before you start making unnecessary complexity for yourself.

But, on the off chance you’re in a situation that legitimately needs some extra release-related information, here’s the secret: structured annotation. The annotation on a tag can be literally any sequence of octets you like. How that data is interpreted is up to you.

So, require that annotations on release tags use some sort of structured data format (say YAML or TOML – or even XML if you hate your release manager), and mandate that it contain whatever information you need. You can make sure that the annotation has a valid structure and contains all the information you need with an update hook, which can reject the tag push if it doesn’t meet the requirements, and you’re sorted.
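Such an update hook can be quite small; here is a sketch of my own devising (it assumes a YAML annotation and that the yq tool is installed; it is not taken from any particular project):

#!/bin/sh
# .git/hooks/update -- called with: refname oldrev newrev
refname="$1" newrev="$3"
case "$refname" in
refs/tags/v*)
    # annotated tags are "tag" objects; lightweight tags point at commits
    if [ "$(git cat-file -t "$newrev")" != "tag" ]; then
        echo "release tags must be annotated" >&2
        exit 1
    fi
    # the annotation is everything after the tag object's header block
    if ! git cat-file tag "$newrev" | sed '1,/^$/d' | yq . >/dev/null 2>&1; then
        echo "tag annotation must be valid YAML" >&2
        exit 1
    fi
    ;;
esac
exit 0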

“But I have multiple packages in my repo, with different release cadences and versions!”

This one is common enough that I just refer to it as “the monorepo drama”. Personally, I’m not a huge fan of monorepos, but you do you, boo. Annotated tags can still handle it just fine.

The trick is to include the package name being released in the tag name. So rather than a release tag being named vX.Y.Z, you use foo/vX.Y.Z, bar/vX.Y.Z, and baz/vX.Y.Z. The release automation for each package just triggers on tags that match the pattern for that particular package, and limits itself to those tags when figuring out what the version number is.
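With GitHub Actions, for instance, scoping a release workflow to a single package's tags is just a tag filter (using the hypothetical foo package from above):

on:
  push:
    tags:
      - "foo/v*"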

“But we don’t semver our releases!”

Oh, that’s easy. The tag pattern that marks a release doesn’t have to be vX.Y.Z. It can be anything you want.

Relatedly, there is a (rare, but existent) need for packages that don’t really have a conception of “releases” in the traditional sense. The example I’ve hit most often is automatically generated “bindings” packages, such as protobuf definitions. The source of truth for these is a bunch of .proto files, but to be useful, they need to be packaged into code for the various language(s) you’re using. But those packages need versions, and while someone could manually make releases, the best option is to build new per-language packages automatically every time any of those definitions change.

The versions of those packages, then, can be datestamps (I like something like YYYY.MM.DD.N, where N starts at 0 each day and increments if there are multiple releases in a single day).
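
Generating that next version is a one-liner or three (a sketch, assuming the tags carry a v prefix):

TODAY=$(date -u +%Y.%m.%d)
# N = how many releases have already happened today, so it starts at 0
N=$(git tag -l "v${TODAY}.*" | wc -l)
git tag -a "v${TODAY}.${N}" -m "Automated bindings release ${TODAY}.${N}"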

This process allows all the code that needs the definitions to declare the minimum version of the definitions that it relies on, and everything is kept in sync and tracked almost like magic.

Th-th-th-th-that’s all, folks!

I hope you’ve enjoyed this bit of mild debunking. Show your gratitude by buying me a refreshing beverage, or purchase my professional expertise and I’ll answer all of your questions and write all your CI jobs.

22 November, 2024 08:15PM by Matt Palmer (mpalmer@hezmatt.org)

November 20, 2024

Ian Jackson

The Rust Foundation's 2nd bad draft trademark policy

tl;dr: The Rust Foundation’s new trademark policy still forbids unapproved modifications: this would forbid both the Rust Community’s own development work(!) and normal Free Software distribution practices.

Background

In April 2023 I wrote about the Rust Foundation’s ham-fisted and misguided attempts to update the Rust trademark policy. This turned into drama.

The new draft

Recently, the Foundation published a new draft. It’s considerably less bad, but the most serious problem, which I identified last year, remains.

It prevents redistribution of modified versions of Rust, without pre-approval from the Rust Foundation. (Subject to some limited exceptions.) The people who wrote this evidently haven’t realised that distributing modified versions is how free software development works. Ie, the draft Rust trademark policy even forbids making a github branch for an MR to contribute to Rust!

It’s also very likely unacceptable to Debian. Rust is still on track to repeat the Firefox/Iceweasel debacle.

Below is a copy of my formal response to the consultation. The consultation closes at 07:59:00 UTC tomorrow (21st November), ie, at the end of today (Wednesday) US Pacific time, so if you want to reply, do so quickly.

My consultation response

Hi. My name is Ian Jackson. I write as a Rust contributor and as a Debian Developer with first-hand experience of Debian’s approach to trademarks. (But I am not a member of the Debian Rust Packaging Team.)

Your form invites me to state any blocking concerns. I’m afraid I have one:

PROBLEM

The policy on distributing modified versions of Rust (page 4, 8th bullet) is far too restrictive.

PROBLEM - ASPECT 1

On its face the policy forbids making a clone of the Rust repositories on a git forge, and pushing a modified branch there. That is publicly distributing a modified version of Rust.

I.e., the current policy forbids the Rust community’s own development workflow!

PROBLEM - ASPECT 2

The policy also does not meet the needs of Software-Freedom-respecting downstreams, including community Linux distributions such as Debian.

There are two scenarios (fuzzy, and overlapping) which provide a convenient framing to discuss this:

Firstly, in practical terms, Debian may need to backport bugfixes, or sometimes other changes. Sometimes Debian will want to pre-apply bugfixes or changes that have been contributed by users, and are intended eventually to go upstream, but are not included upstream in official Rust yet. This is a routine activity for a distribution. The policy, however, forbids it.

Secondly, Debian, as a point of principle, requires the ability to diverge from upstream if and when Debian decides that this is the right choice for Debian’s users. The freedom to modify is a key principle of Free Software. This includes making changes that the upstream project disapproves of. Some examples of this, where Debian has made changes, that upstream do not approve of, have included things like: removing user-tracking code, or disabling obsolescence “timebombs” that stop a particular version working after a certain date.

Overall, while alignment in values between Debian and Rust seems to be very good right now, modifiability is a matter of non-negotiable principle for Debian. The 8th bullet point on page 4 of the PDF does not give Debian (and Debian’s users) these freedoms.

POSSIBLE SOLUTIONS

Other formulations, or an additional permission, seem like they would be able to meet the needs of both Debian and Rust.

The first thing to recognise is that forbidding modified versions is probably not necessary to prevent language ecosystem fragmentation. Many other programming languages are distributed under fully Free Software licences without such restrictive trademark policies. (For example, Python; I’m sure a thorough survey would find many others.)

The scenario that would be most worrying for Rust would be “embrace - extend - extinguish”. In projects with a copyleft licence, this is not a concern, but Rust is permissively licenced. However, one way to address this would be to add an additional permission that allows distribution of modified versions without pre-approval, provided that the modified source code is made available to all recipients under the original Rust licence.

I suggest therefore adding the following 2nd sub-bullet point to the 8th bullet on page 4:

  • changes which are shared, in source code form, with all recipients of the modified software, and publicly licenced under the same licence as the official materials.

This means that downstreams who fear copyleft have the option of taking Rust’s permissive copyright licence at face value, but are limited in the modifications they may make, unless they rename. Conversely downstreams such as Debian who wish to operate as part of the Free Software ecosystem can freely make modifications.

It also, obviously, covers the Rust Community’s own development work.

NON-SOLUTIONS

Some upstreams, faced with this problem, have offered Debian a special permission: ie, said that it would be OK for Debian to make modifications that Debian wants to. But Debian will not accept any Debian-specific permissions.

Debian could of course rename their Rust compiler. Debian has chosen to rename in the past: infamously, a similar policy by Mozilla resulted in Debian distributing Firefox under the name Iceweasel for many years. This is a PR problem for everyone involved, and results in a good deal of technical inconvenience and makework.

“Debian could seek approval for changes, and the Rust Foundation would grant that approval quickly”. This is unworkable on a practical level - requests for permission do not fit into Debian’s workflow, and the resulting delays would be unacceptable. But, more fundamentally, Debian rightly insists that it must have the freedom to make changes that the Foundation do not approve of. (For example, if a future Rust shipped with telemetry features Debian objected to.)

“Debian and Rust could compromise”. However, Debian is an ideological as well as technological project. The principles I have set out are part of Debian’s Foundation Documents - they are core values for Debian. When Debian makes compromises, it does so very slowly and with great deliberation, using its slowest and most heavyweight constitutional governance processes. Debian is not likely to want to engage in such a process for the benefit of one programming language.

“Users will get Rust from upstream”. This is currently often the case. Right now, Rust is moving very quickly, and by Debian standards is very new. As Rust becomes more widely used, more stable, and more part of the infrastructure of the software world, it will need to become part of standard, stable, reliable, software distributions. That means Debian.

(The consultation was a Google Forms page with a single text field, so the formatting isn’t great. I have edited the formatting very lightly to avoid rendering bugs here on my blog.)




20 November, 2024 12:50PM

Russell Coker

Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) on which I fix all Linux compatibility issues, I can fix most problems in a small fraction of the time it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

For people doing technical computer work such as programming there’s a large portion of the employees who are computer hobbyists who like to fiddle with computers. But if the support system is run well even they will appreciate having computers just work most of the time and for a large portion of the failures having someone immediately recognise the problem, like the issues with NVidia drivers that I have documented so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email in the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email to people and then force computer security training on everyone who clicks on them. One problem with this is that attackers only need to fool one person on one occasion and when you have hundreds of people doing something on rare occasions that’s not part of their core work they will periodically get it wrong. When every test Phishing run finds several people who need extra training it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email, if that was the case would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid, it’s great that they seek expert advice when they are unsure about things but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private, it’s expected that the company can read it under certain circumstances (in most jurisdictions) and having email open in public areas of the office where colleagues might see it is expected. You can visit gmail.com on your lunch break to read personal email but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company, instead of 600 people each spending a couple of minutes working out whether it’s legit, you have one person who’s good at recognising spam (because it’s their job) click a “remove mail from this sender from all mailboxes” button, and 600 messages are deleted and the sender is blocked.

Delaying email would be a concern. It’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention. So human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company probably email to the senior people would never get noticeably delayed and while people like me would get their mail delayed on occasion people doing technical work generally don’t have notifications turned on for email because it’s a distraction and a fast response isn’t needed. There are a few senders where fast response is required, which is mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult to do.

How to Solve This

Spam and Phishing became serious problems over 20 years ago and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers and they haven’t managed to filter out spam/phishing mail effectively so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing but that same technology can produce better-crafted hostile email to evade filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department) and you would have a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account and they could be filtered by someone with a suitable security level and not require any special configuration of the mail server. So the person who reads the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC-authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).

For a FOSS implementation of such things the server side of it (including extracting account data from a directory to determine which department a user is in) would be about a day’s work and then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and adding certain automated messages to the allow-list.

The Change

One of the first things to do is to configure the system to add every recipient of an outbound message to the allow list, so that they can receive a reply. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases.
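
As a rough sketch, assuming Maildir storage and procmail’s formail (the paths, folder layout, and allowlist location are illustrative only):

# Harvest recipients from all users' sent-mail folders into an allow list
for msg in /var/vmail/*/Sent/cur/*; do
    formail -c -z -x To: -x Cc: < "$msg"
done \
  | tr ',' '\n' \
  | grep -Eio '[a-z0-9._%+-]+@[a-z0-9.-]+' \
  | sort -u > /etc/mail/allowlist.txt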

But even with the sent-mail folders processed, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain-wide additions of all the sites that send password confirmation messages. You would need rules to redirect inbound mail from the old addresses to the new style, and then a huge amount of mail would need to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user then that’s 100 hours of work, about 12.5 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it, and after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day, effectively wasting one employee’s work! I’m sure that’s the low end of the range; an average of 5 minutes per day doesn’t seem unreasonable, especially when people who are unsure about a phishing email post it to Slack and multiple employees spend time analysing it. So you could have the equivalent of 5 employees’ time wasted by hostile email, while avoiding that would take only a fraction of a few people’s time, adding up to less than an hour of total work per day.

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. For 30 years I have been unsuccessfully suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time.

20 November, 2024 05:22AM by etbe

Arnaud Rebillout

Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to setup Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it, it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.

The output should look something like this:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done!  🌟 

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible

$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.
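
You can double-check that both are visible (assuming the APT package is installed too; paths will match your own setup):

$ which -a ansible
/home/me/.local/bin/ansible
/usr/bin/ansible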

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done!  🌟 

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.
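
Note that pipx inject accepts several packages at once, so subsequent fixes can be batched. For example (netaddr and jmespath are just common examples, not necessarily what your playbooks will need):

$ pipx inject ansible dnspython netaddr jmespath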

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

20 November, 2024 12:00AM by Arnaud Rebillout

November 19, 2024

hackergotchi for Aurelien Jarno

Aurelien Jarno

AI crawlers should be smarter

It would be fantastic if all those AI companies dedicated some time to making their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.

Hint: Changing the display options does not provide more training data!

19 November, 2024 10:31PM by aurel32

Melissa Wen

Display/KMS Meeting at XDC 2024: Detailed Report

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.

Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.

Short on Time?

  1. Catch the lightning talk summarizing the meeting here (you can even speed it up 2x).
  2. For a quick written summary, scroll down to the TL;DR section.

TL;DR

This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):

  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.

Bringing together Linux display developers in the XDC 2024

While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.

Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)

Link: https://indico.freedesktop.org/event/6/contributions/383/

Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts.

The final agenda covered five topics in the scheduled order:

  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).

Unpacking the Topics

Similar to the hackfest, the meeting agenda evolved over the conference. During the 3 hours of meeting, I coordinated the room and discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed.

From his notes, let’s dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components.

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.

  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.

Real-Time Scheduling Challenges

We discussed real-time scheduling during this year’s Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front.

  • Context: Non-blocking page-flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit uses real-time scheduling.
  • Action items:
    • Explore alternative backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).

HDR/Color Management

This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.

Here’s a breakdown of the key points raised at this meeting:

  • Talk: Color operations for Linux color pipeline on AMD devices: In the previous day, Alex Hung (AMD) presented the implementation of this API on AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
  • Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches can be found here. [As was humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to KMS generic API
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.

Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!

Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.

  • Context:
  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback

In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.

  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.

To address this issue, we discussed several potential improvements:

  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.

By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.

A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.

This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.

Stay tuned for future updates!

19 November, 2024 01:00PM

November 18, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.

Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN (due to a false positive on a test, a package whose tests had also failed under the previous version, and some concern over new deprecation warnings when using the headers directly, as e.g. the mlpack R package does) we are now on CRAN. I noticed a missing feature under the large ‘64bit word’ option (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)

  • Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)

    • Faster handling of symmetric matrices by inv() and rcond()

    • Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()

    • Added solve_opts::force_sym option to solve() to force the use of the symmetric solver

    • More efficient handling of compound expressions by solve()

  • Added exporter specialisation for icube for the ARMA_64BIT_WORD case

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 November, 2024 10:31PM

hackergotchi for C.J. Collier

C.J. Collier

Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}" 
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)

pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1

pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
  | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238

18 November, 2024 07:21PM by C.J. Collier

hackergotchi for Philipp Kern

Philipp Kern

debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.
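
Mechanically, issuing such a certificate is what plain ssh-keygen does with -s; the CA automates roughly the following (the CA path, identity, and principal here are made up for illustration):

# Sign the user's public key with the CA key, valid for one day
ssh-keygen -s /etc/ssh/user_ca -I pkern -n pkern -V +1d id_ed25519_sk.pub
# This produces id_ed25519_sk-cert.pub, which ssh offers alongside the key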

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

SomeoneTM should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.
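
For reference, generating such keys looks like this with a reasonably recent OpenSSH (the file names are just examples):

# SK-backed ed25519 key; the on-disk file is only a handle and is
# useless without the physical token, but back it up anyway
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# Fallback for tokens that only speak U2F/ecdsa
ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk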

18 November, 2024 04:43PM by Philipp Kern (noreply@blogger.com)

Russ Allbery

Review: Delilah Green Doesn't Care

Review: Delilah Green Doesn't Care, by Ashley Herring Blake

Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.

Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.

Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.

Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.

I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.

It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.

This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.

This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.

The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.

This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.

I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.

Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.

Followed by Astrid Parker Doesn't Fail.

Rating: 9 out of 10

18 November, 2024 04:20AM

November 17, 2024

Review: Dark Deeds

Review: Dark Deeds, by Michelle Diener

Series: Class 5 #2
Publisher: Eclipse
Copyright: January 2016
ISBN: 0-6454658-4-4
Format: Kindle
Pages: 340

Dark Deeds is the second book of the self-published Class 5 science fiction romance series. It is a sequel to Dark Horse and will spoil the plot of that book, but it follows the romance series convention of switching to a new protagonist in the same universe and telling a loosely-connected story.

Fiona, like Rose in the previous book, was kidnapped by the Tecran in one of their Class 5 ships, although that's not entirely obvious at the start of the story. The book opens with her working as a slave on a Garmman trading ship while its captain works up the nerve to have her killed. She's spared this fate when the ship is raided by Krik pirates. Some brave fast-talking, and a touch of honor among thieves, lets her survive the raid and be rescued by a pursuing Grih battleship, with a useful electronic gadget as a bonus.

The author uses the nickname "Fee" for Fiona throughout this book and it was like nails on a chalkboard every time. I had to complain about that before getting into the review.

If you've read Dark Horse, you know the formula: lone kidnapped human woman, major violations of the laws against mistreatment of sentient beings that have the Grih furious on her behalf, hunky Grih starship captain who looks like a space elf, all the Grih are fascinated by her musical voice, she makes friends with a secret AI... Diener found a formula that worked well enough that she tried it again, and it would not surprise me if the formula repeated through the series. You should not go into this book expecting to be surprised.

That said, the formula did work the first time, and it largely does work again. I thoroughly enjoyed Dark Horse and wanted more, and this is more, delivered on cue. There are worse things, particularly if you're a Kindle Unlimited reader (I am not) and are therefore getting new installments for free. The Tecran fascination with kidnapping human women is explained sufficiently in Fiona's case, but I am mildly curious how Diener will keep justifying it through the rest of the series. (Maybe the formula will change, but I doubt it.)

To give Diener credit, this is not a straight repeat of the first book. Fiona is similar to Rose but not identical; Rose had an unshakable ethical calm, and Fiona is more of a scrapper. The Grih are not stupid and, given the amount of chaos Rose unleashed in the previous book, treat the sudden appearance of another human woman with a great deal more caution and suspicion. Unfortunately, this also means far less of my favorite plot element of the first book: the Grih being constantly scandalized and furious at behavior the protagonist finds sadly unsurprising.

Instead, this book has quite a bit more action. Dark Horse was mostly character interactions and tense negotiations, with most of the action saved for the end. Dark Deeds replaces a lot of the character work with political plots and infiltrating secret military bases and enemy ships. The AI (named Eazi this time) doesn't show up until well into the book and isn't as much of a presence as Sazo. Instead, there's a lot more of Fiona being drafted into other people's fights, which is entertaining enough while it's happening but which wasn't as delightful or memorable as Rose's story.

The writing continues to be serviceable but not great. It's a bit cliched and a bit awkward.

Also, Diener uses paragraph breaks for emphasis.

It's hard to stop noticing it once you see it.

Thankfully, once the story gets going and there's more dialogue, she tones that down, or perhaps I stopped noticing. It's that kind of book (and that kind of series): it's a bit rough to get started, but then there's always something happening, the characters involve a whole lot of wish-fulfillment but are still people I like reading about, and it's the sort of unapologetic "good guys win" type of light science fiction that is just the thing when one simply wants to be entertained. Once I get into the book, it's easy to overlook its shortcomings.

I spent Dark Horse knowing roughly what would happen but wondering about the details. I spent Dark Deeds fairly sure of the details and wondering when they would happen. This wasn't as fun of an experience, but the details were still enjoyable and I don't regret reading it. I am hoping that the next book will be more of a twist, or will have a character more like Rose (or at least a character with a better nickname). Sort of recommended if you liked Dark Horse and really want more of the same.

Followed by Dark Minds, which I have already purchased.

Rating: 6 out of 10

17 November, 2024 05:55AM

November 14, 2024

Reproducible Builds

Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position they are in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


Other tributes:

14 November, 2024 03:00PM

Stefano Zacchiroli

In memory of Lunar

In memory of Lunar

I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working for as long as he was able to, as part of a team he loved, and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno.

--- Zack

14 November, 2024 01:56PM

November 13, 2024

Russell Coker

Modern Sleep

Julius wrote an insightful blog post about the “modern sleep” issue with Windows [1]. Basically Microsoft decided that the right way to run laptops is to never entirely sleep, which uses more battery but gives better options for waking up and doing things. I agree with Microsoft in concept, and this is a problem that can be solved. A phone can run for 24+ hours without ever fully sleeping; a laptop has a more power-hungry CPU and peripherals, but also a much larger battery, so it should be able to do the same. Some of the reviews for Snapdragon Windows laptops claim up to 22 hours of actual work without charging! So having suspend not really stop the system should be fine.

The ability of a phone to never fully sleep changes the quality of the usage experience: you can pick it up and have it respond immediately, and all manner of services can be checked for new updates which may require a notification to the user. The XMPP protocol (AKA Jabber) was invented in 1999, before laptops were common, and Instant Message systems were common long before then. But using Jabber or another IM system on a desktop was a very different experience to using it on a laptop, and using it on a phone is different again. “Modern sleep” allows laptops to act like phones in regard to such messaging services. Currently I have Matrix IM clients running on my Android phone and Linux laptop; if I get a notification that requires a lengthy response then I get out my laptop to respond. If I had an ARM based laptop that never fully shut down I would have much less need for Matrix on a phone.

Making “modern sleep” popular will lead to more development of OS software to work with it. For Linux this will hopefully mean that regular Linux distributions (as opposed to Android, which, while running a Linux kernel, is very different to Debian etc) get better support for such things and therefore become more usable on phones. Debian on a Librem 5 or PinePhonePro isn’t very usable due to battery life issues.

A laptop with an LTE card can be used for full mobile phone functionality, and with “modern sleep” this becomes a viable option. I am tempted to make a laptop with an LTE card and a Bluetooth headset a replacement for my phone. Some people will say “what if someone tries to call you when it’s not convenient to have your laptop with you”; my response is “what if people learn to not expect me to answer the phone at any time, as they managed in the 90s”. Seriously, SMS or Matrix me if you want an instant response, and if you want a long chat schedule it via SMS or Matrix.

Dell has some useful advice about how to use their laptops (and probably most laptops from recent times) in this regard [2]. You can’t close the lid before unplugging the power cable: you have to unplug first and then close. You shouldn’t put a laptop in a sealed bag for travel either. This is a terrible situation: you can put a tablet in a bag without taking any special precautions when unplugging, and laptops should work the same. The end result of what Microsoft, Dell, Intel, and others are doing will be good, but they are making some silly design choices along the way! I blame Intel mostly for selling laptop CPUs with TDPs >40W!

For an amusing take on this, Linus Tech Tips has a video about being forced to use MacBooks by Microsoft’s implementation of Modern Sleep [3].

I’ll try out some ARM laptops in the near future and blog about how well they work on Debian.

13 November, 2024 10:10AM by etbe

November 12, 2024

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - November 2024

Our Debian User Group met on November 2nd after a somewhat longer summer hiatus than normal. It was lovely to see a bunch of people again and to be able to dedicate a whole day to hacking :)

Here is what we did:

lavamind:

  • reproduced puppetdb FTBFS #1084038 and reported the issue upstream
  • uploaded a new upstream version for pgpainless (1.6.8-1)
  • uploaded a new revision for ruby-moneta (1.6.0-3)
  • sent an inquiry to the backports team about #1081696

pollo:

  • reviewed & merged many lintian merge requests, clearing out most of the queue
  • uploaded a new lintian release (1.120.0)
  • worked on unblocking the revival of lintian.debian.org (many thanks to anarcat and pkern)
  • apparently (kindly) told people to rtfm at least 4 times :)

anarcat:

LeLutin:

  • opened an RFS on the ruby team mailing list for the new upstream version of ruby-necromancer
  • worked on packaging the new upstream version of ruby-pathspec

tvaz:

  • did AM (Application Manager) work

tassia:

  • explored the Debian Jr. project (website, wiki, mailing list, salsa repositories)
  • played a few games for Nico's entertainment :-)
  • built and tested a Debian Jr. live image

Pictures

This time around, we went back to Foulab. Thanks for hosting us!

As always, the hacklab was full of interesting stuff and I took a few (bad) pictures for this blog post:

Two old video cameras and a 'My First Sony' tape recorder An ALP HT-286 machine with a very large 'turbo' button A New Hampshire 'IPROUTE' vanity license plate

12 November, 2024 11:45PM by Louis-Philippe Véronneau

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Complex for Whom?

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve on their own.

That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I’ve been informed that Fred Brooks coined a term for what I call “lower bound complexity” – “Essential Complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering”, which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it someone needs a credible promotion packet, new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity is being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only thing that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be they existing contracts with cloud providers, CIO mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and will make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. In a large enough organization (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else – maybe outside your organization or in a non-engineering function – it must grow back. Sometimes, the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or does it mean something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who’s taking this complexity on – or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

12 November, 2024 08:21PM

Sven Hoexter

fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and as we're all human, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you'll have to put additional work into the validation and pipe things through flux envsubst.

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we have a clusters folder with one subfolder per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done
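
How the CLUSTERS variable gets populated is specific to your pipeline; a minimal sketch given the clusters/<name> layout described above (an assumption about your setup, using GNU find):

CLUSTERS=$(find clusters -mindepth 1 -maxdepth 1 -type d -printf '%f\n')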

Second Iteration: Make Sure Our Workload Subfolder Have a kustomization.yaml

Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacked some other referenced objects, but was still happily included into the root kustomization by kustomize create and flux, which of course did not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f|wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub action workflows. Hope that's enough to convey the idea of what to check for.
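
For illustration, a stripped-down workflow running such a check on every pull request could look like this (a sketch only; the workflow name, trigger and script path are assumptions, and kustomize needs to be available on the runner):

name: validate-flux-kustomizations
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pre-flight kustomize validation
        # the for loop from above, wrapped into a script
        run: ./hack/validate-clusters.sh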

12 November, 2024 02:19PM

hackergotchi for James Bromberger

James Bromberger

My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

12 November, 2024 12:34PM by james

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, October 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In October, 20 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 6.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 15.0h (out of 87.0h assigned and 13.0h from previous period), thus carrying over 85.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 10.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 4.0h (out of 0.0h assigned and 4.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 29.0h (out of 26.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 60.0h (out of 23.5h assigned and 36.5h from previous period).
  • Guilhem Moulin did 7.5h (out of 19.75h assigned and 0.25h from previous period), thus carrying over 12.5h to the next month.
  • Lee Garrett did 15.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 44.75h to the next month.
  • Lucas Kanashiro did 10.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 9.75h (out of 24.0h assigned), thus carrying over 14.25h to the next month.
  • Santiago Ruano Rincón did 23.5h (out of 25.0h assigned), thus carrying over 1.5h to the next month.
  • Sean Whitton did 6.25h (out of 1.0h assigned and 5.25h from previous period).
  • Stefano Rivera did 1.0h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.0h to the next month.
  • Sylvain Beucler did 9.5h (out of 16.0h assigned and 44.0h from previous period), thus carrying over 50.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 10.5h (out of 12.0h assigned), thus carrying over 1.5h to the next month.

Evolution of the situation

In October, we have released 35 DLAs.

Some notable updates prepared in October include denial of service vulnerability fixes in nss, regression fixes in apache2, multiple fixes in php7.4, and new upstream releases of firefox-esr, openjdk-17, and openjdk-11.

Additional contributions were made for the stable Debian 12 bookworm release by several LTS contributors. Arturo Borrero Gonzalez prepared a parallel update of nss, Bastien Roucariès prepared a parallel update of apache2, and Santiago Ruano Rincón prepared updates of activemq for both LTS and Debian stable.

LTS contributor Bastien Roucariès undertook a code audit of the cacti package and in the process discovered three new issues in node-dompurify, which were reported upstream and resulted in the assignment of three new CVEs.

As always, the LTS team continues to work towards improving the overall sustainability of the free software base upon which Debian LTS is built. We thank our many committed sponsors for their ongoing support.

Thanks to our sponsors

Sponsors that joined recently are in bold.

12 November, 2024 12:00AM by Roberto C. Sánchez

November 11, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows using std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log-level documentation entry was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 November, 2024 05:47PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews for Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand that, with varying intensities–sometimes they demand research to be published in an OA venue, sometimes a mandate will only “prefer” it. Lately, some journals and funder bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopted such mandates.

Now, what happens after a university or funder body adopts such a mandate? Many individual academics cheer, as it is the “right thing to do.” However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or “feet dragging” of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren’t more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores an hypothesis based on Karl Marx’s productive worker theory and Pierre Bourdieu’s ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it’s not only the academics’ sharing ethos, but private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

11 November, 2024 02:53PM

Vincent Bernat

Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with a plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
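
With this flake in place, building and checking the result could look like the following (a sketch; assumes a flake-enabled Nix, and uses caddy list-modules to confirm the plugin made it in):

$ nix build
$ ./result/bin/caddy list-modules | grep -i powerdns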

Update (2024-11)

This flake won’t work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different, and I have proposed it in another pull request. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

11 November, 2024 07:35AM by Vincent Bernat

November 10, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today, marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.
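
To illustrate the kind of change involved (a generic sketch, not the actual patch from the package): with R_NO_REMAP in effect, the short un-prefixed aliases disappear, so C code calling into the R API has to spell out the Rf_ prefix.

#include <R.h>
#include <Rinternals.h>

SEXP answer(void) {
    /* was: allocVector(REALSXP, 1) -- now needs the Rf_ prefix */
    SEXP ans = PROTECT(Rf_allocVector(REALSXP, 1));
    REAL(ans)[0] = 42.0;
    UNPROTECT(1);
    return ans;
}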

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 November, 2024 07:29PM

Reproducible Builds

Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report, which she had reported to the Android issue tracker, has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has reached a good milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on a (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 November, 2024 06:26PM

Thorsten Alteholz

My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in the motivation of team members to work on these bugs. When I look at these bugs, I just copy and paste the above-mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was my hundred-and-twenty-fourth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1]cups security update for one CVE in Buster to fix the IPP attribute related CVEs.
  • [ELA-1199-1]cups security update for two CVEs in Stretch to fix the IPP attribute related CVEs
  • [ELA-1216-1]graphicsmagick security update for one CVE in Jessie
  • [ELA-1217-1]asterisk security update for two CVEs in Buster related to privilege escalation
  • [ELA-1218-1]asterisk security update for two CVEs in Stretch related to privilege escalation and DoS
  • [ELA-1223-1]xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn't find any time to work on this topic.

Debian Matomo

Unfortunately I didn't find any time to work on this topic.

Debian Astro

Unfortunately I didn't find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

10 November, 2024 12:26AM by alteholz

November 09, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. you load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains a HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. The server's response to the request in (3) is an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

  • hx-get="<CGI ENDPOINT GOES HERE>": suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form.

  • hx-select=".editcomment form": extracts the edit-comment form from within that document.

  • hx-swap=beforeend and hx-target=".addcomment": append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment).

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!
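
Put together, the anchor ends up looking something like this (a sketch: the URL, page name and surrounding markup are illustrative, not IkiWiki's exact output):

<div class="addcomment">
  <a href="/ikiwiki.cgi?do=comment&page=some_post"
     hx-get="/ikiwiki.cgi?do=comment&page=some_post"
     hx-select=".editcomment form"
     hx-swap="beforeend"
     hx-target=".addcomment">comment on this page</a>
</div>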

Second step: handling previews

The old Preview Comment page

The old Preview Comment page

In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to insert a rendering of the comment-in-progress being inserted into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.
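
In sketch form (the selectors here are illustrative; the real ones match IkiWiki's markup and the empty div added to page.tmpl):

<input type="submit" name="preview" value="Preview"
       hx-post="/ikiwiki.cgi"
       hx-select=".comment-preview"
       hx-target="#comment-preview-hook">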

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment

Moderation message upon submitting a comment

One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.
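
The shape of it is roughly this (a simplified sketch rather than jon.js itself; the button selector and the final relabelling step are illustrative):

document.body.addEventListener('htmx:beforeSwap', function (evt) {
    // only intercept "held for moderation" responses (the patched HTTP 202)
    if (evt.detail.xhr.status !== 202) return;
    evt.detail.shouldSwap = false;  // don't splice in the bare status page
    // re-use the preview flow, then mark the result as awaiting moderation
    document.querySelector('input[name="preview"]').click();
});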

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

09 November, 2024 09:16PM

November 08, 2024

hackergotchi for Thomas Lange

Thomas Lange

Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I've been a happy NIS (formerly YP) user for more than 30 years. I started using it with SunOS 4.0, later with Solaris, and with Linux since 1999.

In the past, a colleague was not happy using NIS+ when, after a short time, he couldn't log in as root because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS, but I can live with them because I manage all computers in the network that have access to the NIS maps. And NIS on Linux can use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately, NIS support in the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) lacks NIS support in PAM; otherwise NIS is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use the other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd, it may generate the new password hash with the old DES crypt algorithm, which is very weak and only uses the first 8 characters of your password. yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it created only weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all. The solution is to use the plain passwd program instead.

On the NIS master, you should set up your NIS configuration to use /etc/shadow and /etc/passwd, even if your other NIS maps are in /var/yp/src or similar. Make sure you have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd maps. You may also want to set the variable MINUID, which defines which entries are excluded from the NIS maps (everything below that UID is left out).
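For example, assuming regular user accounts start at UID/GID 1000 (the Debian default), lines like these in /var/yp/Makefile should do:

MINUID=1000
MINGID=1000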

On all NIS clients you still need the entries in /etc/nsswitch.conf (for passwd, shadow, group, ...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurrences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh
# called by incron as: shadow-change <path> <file> <event>

PATH=/usr/sbin:/usr/bin

# only act on the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
# give passwd a moment to finish the rename, then rebuild the NIS maps
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line ($@ expands to the watched path, $# to the name of the file, and $% to the event):

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or to watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file stops working once the file has been removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond
For example:

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert:

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.

I've also discovered the pamtester package, which is a nice tool for testing your PAM configs.
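For example, to check that a user can still authenticate through your PAM stack (substitute a real service name and user):

# pamtester login someuser authenticate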

08 November, 2024 12:32PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: October’s report (by Anupa Ann Joseph)

Debian Contributions: 2024-10

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne

After significant changes earlier this year, the state of architecture cross bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully. Here are two examples of the kind of issues the bootstrap testing identifies.

At some point, libpng1.6 would fail to cross build on musl architectures, failing to locate zlib, whereas it would succeed on other ones. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all Linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there.

The newt package would fail to cross build for many 32-bit architectures, whereas it would succeed for armel and armhf, due to -Wincompatible-pointer-types. It turns out that this warning was turned into an error via -Werror; it had merely compiled with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidiChar (aka uint32_t), and it actually affects native building on i386.

Miscellaneous contributions

  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian’s online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn’t regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: it can now create merge requests on Salsa automatically (17 created, with a new batch coming this month). He imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, and improved the command line interface options. He also performed user support, fixing various issues, and prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

08 November, 2024 12:00AM by Anupa Ann Joseph

Reproducible Builds (diffoscope)

diffoscope 283 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 283. This version includes the following changes:

[ Martin Abente Lahaye ]
* Fix crash when objdump is missing when checking .EFI files.

You can find out more by visiting the project homepage.

08 November, 2024 12:00AM

November 07, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

07 November, 2024 09:51AM

November 06, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.

Ada Lovelace Day 2024

As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024, featuring interviews with women in Debian, will serve as inspiring motivation for more women to join our community.

MiniDebConf Cambridge

This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.

If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.

FTPmaster accepts MRs for DAK

At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.

As a perfectly random example of such improvements, some older MR, "Add commands to accept/reject updates from a policy queue", might give you some inspiration.

At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.

  • Accept+Bug report

Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option was proposed for FTP Team members reviewing packages in NEW; until the corresponding MR gets accepted, it remains a proposal:

Accept + Bug Report

This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.

  • Binary name changes - for instance, if done to experimental, not via NEW

When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.

Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.

  • Remove dependency tree

When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:

a) the DAK removal tooling to remove the entire dependency tree
   after prompting the bug report author for confirmation, or

b) the system to auto-generate corresponding bug reports for all
   packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.

In my opinion the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them--possibly even preparing to join the FTP Team.

On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.

Contacting teams

My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific order--just my own somewhat naïve view of Debian, which I'm eager to make more informed.

Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.

Interesting readings on our mailing lists

I consider the following threads on our mailing lists interesting reading and would like to add some comments.

Sensible languages for younger contributors

Though the discussion on debian-devel about programming languages took place in September, I recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.

"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard

I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.

Concerns regarding the "Open Source AI Definition"

A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions, particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.

Kind regards,
Andreas.

06 November, 2024 11:00PM by Andreas Tille

hackergotchi for Daniel Lange

Daniel Lange

Weird times ... or how the New York DEC decided the US presidential elections

November 2024 will be known as the time when the killing of peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. Investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy-driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention-driven economy: Human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

06 November, 2024 09:15AM by Daniel Lange

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Making America Great Again

Justice For Peanut

Some interesting takeaways (With the caveat that exit polls are not completely accurate and we won't have the full picture for days.)

  • President Trump seems to have won the popular vote, which I believe no Republican has done since Reagan.

  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue). There is a noticeable divide, but it is single versus married, not women versus men per se.

  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.

  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.

  • Republicans have taken the Senate and, if trends continue as they are, will retain control of the House of Representatives.

  • President Biden may have actually been a better candidate than Border Czar Harris.

06 November, 2024 07:11AM