The diversity of Linux distributions is one of its strengths, but it can also be challenging for app and game development. Where do we need more standards? For example, package management, graphics APIs, or other aspects of the ecosystem? Would such increased standards encourage broader adoption of the Linux ecosystem by developers?

  • TrivialBetaState@sopuli.xyz · ↑2 · 9 hours ago

    While all areas could benefit from standardization in terms of stability and ease of development, the whole system and each area would suffer in terms of creativity. There needs to be a balance. However, if I had to choose one thing, I’d say package management. At the moment we have deb, rpm, pacman, flatpak, snap (the latter probably should not be considered, as the server side is proprietary) and more from some niche distros. This makes it very difficult for small developers to offer their work to all/most users. Otherwise, I think it is a blessing to have so many DEs, APIs, etc.

  • HiddenLayer555@lemmy.ml · ↑37 · 20 hours ago

    Where app data is stored.

    ~/.local

    ~/.config

    ~/.var

    ~/.appname

    Sometimes more than one place for the same program

    Pick one and stop cluttering my home directory
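
    The standard being asked for here already exists: the XDG Base Directory specification (~/.config for settings, ~/.local/share for data, ~/.cache for disposable files); the problem is that apps ignore it. A minimal Python sketch of how an app could resolve its directories per that spec, reusing the “appname” from the list above:

    ```python
    import os
    from pathlib import Path

    def xdg_dir(env_var: str, fallback: str) -> Path:
        """Resolve an XDG base directory, using the spec's default if unset."""
        value = os.environ.get(env_var)
        return Path(value) if value else Path.home() / fallback

    # Per the XDG Base Directory spec, an app called "appname" would use:
    config_dir = xdg_dir("XDG_CONFIG_HOME", ".config") / "appname"    # settings
    data_dir = xdg_dir("XDG_DATA_HOME", ".local/share") / "appname"   # app data
    cache_dir = xdg_dir("XDG_CACHE_HOME", ".cache") / "appname"       # disposable

    print(config_dir, data_dir, cache_dir)
    ```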

    • rice@lemmy.org · ↑3 · 10 hours ago

      Yeah, I like how a lot of apps have moved to using .config, but Mozilla just moved out of there and now has a .mozilla folder outside of it… wtf. It is insanely sad.

      I have actually moved my entire set of “user home folder” folders out of there just because it is so ugly and unorganized. I now use /home/user/userfolders/… with all my stuff like documents, videos, etc. in there.

    • itslilith@lemmy.blahaj.zone · ↑4 · 20 hours ago

      It’s pretty bad. Steam, for example, has both ~/.steam and ~/.local/share/Steam for some reason. I’m just happy I moved to an impermanent setup for my PC, so I don’t need to worry that something I temporarily install is going to clutter my home directory with garbage.

      • rice@lemmy.org · ↑2 · 10 hours ago

        That .steam is a bunch of symlinks to the .local one… which makes it even worse. They also have .steampid and .steampath.

        And even worse, a bunch of games are starting to add their own files there too.

          • rice@lemmy.org · ↑2 · 5 hours ago

            Damn, of all people, you’d think those guys would actually have used .local or .config =[

            I have 73 dotfiles in my home directory lmao

    • Tlaloc_Temporal@lemmy.ca · ↑1 · 15 hours ago

      This would also be nice for atomic distros; application space and system space could be separated in more cases.

    • arsCynic@beehaw.org · ↑2 · 18 hours ago

      This would be convenient indeed, but I’ve learned to be indifferent about it as long as the manual or readme provides helpful and succinct information.

  • JuxtaposedJaguar@lemmy.ml · ↑5 · 13 hours ago

    Each monitor should have its own framebuffer device, rather than one app controlling all monitors at a time and each app needing to implement its own multi-monitor support. I know fbdev is an inefficient, un-accelerated wrapper of the DRI, but it’s so easy to use!

    Want to draw something on a particular monitor? Write to its framebuffer file. Want to run multiple apps on multiple screens without needing your DE to launch everything? Give each app write access to a single fbdev. Want multi-seat support without needing multiple GPUs? Same thing.

    Right now, each GPU only gets 1 fbdev and it has the resolution of the smallest monitor plugged into that GPU. Its contents are then mirrored to every monitor, even though they all have their own framebuffers on a hardware level.
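
    For illustration, a minimal sketch of the fbdev model described above, where a display is just a file you write pixels into. It assumes a 1920x1080 framebuffer at 32 bits per pixel and write access to /dev/fb0 (typically root or the video group); real code would query the geometry with the FBIOGET_VSCREENINFO ioctl instead of hard-coding it:

    ```python
    import mmap
    import os

    # Assumed geometry; a real program would query FBIOGET_VSCREENINFO.
    WIDTH, HEIGHT, BPP = 1920, 1080, 4  # 32 bits per pixel

    fd = os.open("/dev/fb0", os.O_RDWR)       # the GPU's framebuffer file
    fb = mmap.mmap(fd, WIDTH * HEIGHT * BPP)  # map the pixels into memory

    # Paint the top 100 rows solid red (fbdev is commonly BGRX byte order).
    fb[: WIDTH * 100 * BPP] = bytes([0, 0, 255, 0]) * (WIDTH * 100)

    fb.close()
    os.close(fd)
    ```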

  • gandalf_der_12te@discuss.tchncs.de · ↑13 ↓2 · 17 hours ago

    I’m not sure whether this should be a “standard”, but we need a Linux distribution where the user never has to touch the command line. Such a distro would be useful to new users who don’t want to learn terminal commands.

    We also need a good app store where users can download and install software in a reasonably safe and easy way.

    • RawrGuthlaf@lemmy.sdf.org · ↑9 · 12 hours ago

      I really don’t understand this. I put a fairly popular Linux distro on my son’s computer and never needed to touch the command line. I update it by command line only because I think it’s easier.

      Sure, you may run into driver issues or things like that from time to time, but using supported hardware would never present that problem. And Windows has just as many random “gotchas”.

      • lumony@lemmings.world · ↑1 · 3 hours ago

        I try to avoid using the command line as much as possible, but it still crops up from time to time.

        Back when I used Windows, I would legitimately never touch the command line. I wouldn’t even have known how to interact with it.

        We’re not quite there with Linux, but we’re getting closer!

      • ChapulinColorado@lemmy.world · ↑1 · 10 hours ago

        Mint is pretty good, but I found its update GUI app would always fail to update things like Firefox with some mirror error (regardless of whether you told it to use that mirror or not). It happened on my old desktop (now my dad’s main computer), my LG laptop, and a used HP EliteDesk G4. Running “sudo apt update” and “sudo apt upgrade”, then Y to confirm, on the command line was 10x easier and just worked. I do feel better/safer now that they use Linux instead of Windows for internet browsing.

    • AugustWest@lemm.ee · ↑6 ↓3 · 14 hours ago

      Why do people keep saying this? If you don’t want to use the command line then don’t.

      But there is no good reason to say people shouldn’t. It’s always the best way to get across what needs to be done and have the person execute it.

      The fedora laptop I have been using for the past year has never needed the command line.

      On my desktop I use arch. I use the command line because I know it and it makes sense.

      It’s sad people see it as a negative when it is really useful. But as of today you can get by without it.

      • lumony@lemmings.world · ↑1 ↓1 · 3 hours ago

        > It’s always the best way to get across what needs to be done and have the person execute it.

        Sigh. If you want to use the command line, great. Nobody is stopping you.

        For those of us who don’t want to use the command line (most regular users) there should be an option not to, even in Linux.

        > It’s sad people see it as a negative when it is really useful.

        It’s even sadder seeing people lose sight of their humanity when praising the command line while ignoring all of its negatives.

        • AugustWest@lemm.ee · ↑2 ↓1 · 2 hours ago

          > lose sight of their humanity

          Ok this is now a stupid conversation. Really? Humanity?

          Look, you can either follow a flowchart of a dozen different things to click on to get information about your Thunderbolt device, or type boltctl list.

          Do you want me to create screenshots of every step of the way to use a GUI, or just type 12 characters? That is why it is useful. It is easy to explain and easy to ask someone to do. Then they can copy and paste the response, instead of yet another screenshot.

          Next thing you know you will be telling me it is against humanity to “right click”. Or maybe we should all just get a MacBook Wheel.

          Look, I am only advocating that it is a very useful tool. There is nothing “bad” about it, or even hard. What is the negative?

          But as I also said, I have been using a Fedora laptop for over a year and guess what? I never needed the command line. Not once.

          • lumony@lemmings.world · ↑1 ↓1 · 60 minutes ago

            > Ok this is now a stupid conversation. Really? Humanity?

            Yeah, humanity. The fact you think it’s ‘stupid’ really just proves my point that you’re too far gone.

            > or type boltctl list

            Really? You have every command memorized? You never need to look any of them up? No copy-pasting!

            Come on, at least try to make a decent argument to avoid looking like a troll.

            I’m glad rational people have won out and your rhetoric is falling further and further by the wayside. The command line is great for development and developers. It’s awful for regular users which is why regular users never touch it.

            You lost sight of your humanity, which is why you don’t even think about how asinine it is to say “just type this command!” as though people are supposed to know it intuitively.

            Gonna block ya now. Arguing with people like you is tiresome and a waste of time.

            Have fun writing commands. Make sure you don’t use a GUI to look them up, or else you’d be proving me right.

            • AugustWest@lemm.ee · ↑1 · 29 minutes ago

              You blocked me over a difference of opinion?

              Wow.

              All I am trying to say is that it is a tool in the toolbox. Telling people Linux needs it is not true; telling people it’s bad is not true.

              Quit trying to make it a negative. I would encourage anyone to explore how to use this tool. And when trying to communicate ideas on the internet it is a very useful one.

              I have never blocked anyone; I find it so strange. It’s like saying that because of our difference on this issue, we could never have common ground on any other.

              And you ask me to remember my humanity?

    • Ferk@lemmy.ml · ↑13 · 20 hours ago

      interoperability == API standardization == API homogeneity

      standardization != monopolization

  • LovableSidekick@lemmy.world · ↑10 · 1 day ago

    A small thing about filesystem dialogs: in file open/save dialogs, some apps group directories at the top and others mix them in alphabetically with files. My preference is for them to be grouped, but being consistent either way would be nice.

  • ikidd@lemmy.world · ↑24 · 2 days ago

    Domain authentication and group policy analogs. Honestly, I think it’s the major reason Linux isn’t used as a workstation OS, when it’s inherently more suited for it than Windows in most office/gov environments. But if IT can’t centrally manage it like you can with Windows, it’s not going to gain traction.

    Linux in server farms is a different beast to IT. They don’t have to deal with users on that side, just admins.

    • lka1988@lemmy.dbzer0.com · ↑4 · 20 hours ago

      An immutable distro would be ideal for this kind of thing. ChromeOS (an immutable distro example) can be centrally managed, but the caveat with ChromeOS in particular is that its management can only go through Google via their enterprise Google Workspace suite.

      But as a concept, this shows that it’s doable.

      • silly goose meekah@lemmy.world · ↑3 · 19 hours ago

        I don’t think anyone was saying it’s impossible, just that it needs standardization. I imagine Windows is more appealing to companies when it is easier to find admins than if they were to use some specific Linux system that only a few people are skilled enough to manage.

    • Pup Biru@aussie.zone · ↑3 · 20 hours ago

      i’ve never understood why there’s not a good option for using one of the plethora of server management tools with prebuilt helpers for workstations to mimic group policy

      like the tools we have on linux to handle this are far, far more powerful

    • fxdave@lemmy.ml · ↑4 · 23 hours ago

      I’ve never understood putting arbitrary limits on a company laptop. I was always looking for ways to hijack them. Once I ended up using a VM, without limits…

      • lka1988@lemmy.dbzer0.com · ↑4 · 20 hours ago

        TL;DR - Because people are stupid.

        One of my coworkers (older guy) tends to click on things without thinking. He’s been through multiple cyber security training courses, and has even been written up for opening multiple obvious phishing emails.

        People like that are why company-owned laptops are locked down with group policy and other security measures.

      • ikidd@lemmy.world · ↑4 · 23 hours ago

        I mean, it sucks, but the stupid shit people will do with company laptops…

  • Mio@feddit.nu · ↑30 ↓3 · 2 days ago

    A configuration GUI standard. Usually there is a config file that I am supposed to edit as root, usually in the terminal.

    There should be a general GUI tool that reads those files and obeys another file with the rules. Let’s say: if you enable this feature, then you can’t have that one on at the same time. Or the number has to be between 1 and 5, no more and no less. Basic validation. And you could run the program with --validation to let it decide whether the config looks good or not.
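
    As a sketch of the idea, with every name here (the rule keys, the file name, the flag) purely hypothetical: a generic tool could load a declarative rules file alongside the config and run exactly this kind of basic validation:

    ```python
    import json
    import sys

    # Hypothetical rules: numeric ranges and mutually exclusive features,
    # as described in the comment above.
    RULES = {
        "ranges": {"log_level": (1, 5)},            # value must be 1..5
        "conflicts": [("feature_a", "feature_b")],  # not both enabled at once
    }

    def validate(config: dict) -> list[str]:
        """Return a list of human-readable validation errors."""
        errors = []
        for key, (lo, hi) in RULES["ranges"].items():
            if key in config and not lo <= config[key] <= hi:
                errors.append(f"{key} must be between {lo} and {hi}")
        for a, b in RULES["conflicts"]:
            if config.get(a) and config.get(b):
                errors.append(f"{a} and {b} cannot both be enabled")
        return errors

    if __name__ == "__main__":
        # e.g. `python validate.py app.json`, playing the role of the
        # proposed --validation mode
        with open(sys.argv[1]) as f:
            config = json.load(f)
        for err in validate(config):
            print("invalid:", err)
    ```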

    • lumony@lemmings.world · ↑2 · 3 hours ago

      Fuckin hate having to go through config files to change settings…

      It’s always great when settings are easily accessible in a GUI, though! Mad props to the great developers that include them!

      • Einar@lemm.ee (OP) · ↑12 ↓1 · 2 days ago

        I agree. openSUSE should set the standards in this.

        Tbf, they really need a designer to upgrade it visually a bit. It exudes strong “sysadmin only” vibes a bit too much, in my opinion. 🙂

  • kibiz0r@midwest.social · ↑44 · 2 days ago

    ARM support. Every SoC is a new horror.

    Armbian does great work, but if you want another distro you’re gonna have to go on a lil adventure.

  • dosse91@lemmy.trippy.pizza · ↑74 ↓2 · 2 days ago

    Generally speaking, Linux needs better binary compatibility.

    Currently, if you compile something, it’s usually dynamically linked against dozens of libraries that are present on your system, but if you give the executable to someone else with a different distro, they may not have those libraries or their version may be too old or incompatible.

    Statically linking programs is often impossible and generally discouraged, making software distribution a nightmare. Flatpak and similar systems made things easier, but it’s such a crap solution: it basically involves having an entire separate OS installed in parallel, with its own problems, like having a version of Mesa that’s too old for a new GPU and stuff like that. Applications must be able to be packaged with everything they need; there is no reason for dynamic linking to be so important in Linux these days.

    I’m not in favor of proprietary software, but better binary compatibility is a necessity for Linux to succeed, and I’m saying this as someone who’s been using Linux for over a decade and who refuses to install any proprietary software. Sometimes I find myself using apps and games in Wine even when a native version is available just to avoid the hassle of having to find and probably compile libobsoletecrap-5.so
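
    The problem is easy to see for yourself. A small sketch that parses the output of the standard ldd tool and lists which of a binary’s dynamically linked libraries the current system fails to resolve:

    ```python
    import subprocess
    import sys

    def missing_libs(binary: str) -> list[str]:
        """Return the shared libraries `binary` needs but the system lacks."""
        out = subprocess.run(["ldd", binary], capture_output=True, text=True)
        return [
            line.split()[0]
            for line in out.stdout.splitlines()
            if "not found" in line
        ]

    if __name__ == "__main__":
        missing = missing_libs(sys.argv[1])
        print("unresolved dependencies:", ", ".join(missing) or "none")
    ```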

    • lumony@lemmings.world · ↑1 · 3 hours ago

      Static linking is a good thing and should be respected as such for programs we don’t expect to be updated constantly.

      • lumony@lemmings.world · ↑1 · 3 hours ago

        That’s a fair disagreement to have, and a sign that you’re fighting bigger battles than just getting software to work.

        Static linking really is only an issue for proprietary software. Free software will always give users the option to fix programs that break due to updated dependencies.

    • pr06lefs@lemmy.ml · ↑24 ↓1 · 2 days ago

      Nix can deal with this kind of problem. It does take disk space if you’re going to have radically different deps for different apps. But you can 100% install Firefox from 4 years ago and new Firefox on the same system, and each will have the deps it needs.

    • catloaf@lemm.ee · ↑8 · 2 days ago

      I don’t think static linking is that difficult. But for sure it’s discouraged, because I can’t easily replace a statically-linked library, in case of vulnerabilities, for example.

      You can always bundle the dynamic libs in your package and put the whole thing under /opt, if you don’t play well with others.

    • MyNameIsRichard@lemmy.ml · ↑11 ↓2 · 2 days ago

      You’ll never get perfect binary compatibility, because different distros use different versions of libraries. Consider Debian and Arch, which are at opposite ends of the scale.

      • 2xsaiko@discuss.tchncs.de · ↑30 ↓2 · 2 days ago

        And yet, ancient Windows binaries will still (mostly) run and macOS allows you to compile for older system version compatibility level to some extent (something glibc alone desperately needs!). This is definitely a solvable problem.

        Linus keeps saying “you never break userspace” wrt the kernel, but userspace breaks userspace all the time and all people say is that there’s no other way.

        • Magiilaro@feddit.org · ↑7 · 2 days ago

          It works under Windows because the Windows binaries come with all their dependency .dlls (and/or they need some ancient Visual C++ runtime installed).

          This is more or less the Flatpak way of bundling all dependencies into the package.

          Just use Linux the Linux way: install your program via the package manager (including Flatpak) and let that handle the dependencies.

          I have run Linux for over 25 years now and had maybe a handful of cases where userland broke, and that was because I didn’t follow what I was told during a package upgrade.

          The amount of time I spent getting out of DLL hell on Windows, on the other hand… The Linux way is better and way more stable.

          • 2xsaiko@discuss.tchncs.de · ↑5 ↓1 · 2 days ago

            I’m primarily talking about Win32 API when I talk about Windows, and for Mac primarily Foundation/AppKit (Cocoa) and other system frameworks. What third-party libraries do or don’t do is their own thing.

            There’s also nothing wrong with bundling specialized dependencies in principle if you provide precompiled binaries. If it’s shipped via the system package manager, that can manage the library versions and in fact it should do that as far as possible. Where this does become a problem is when you start shipping stuff like entire GUI toolkits (hello bundled Qt which breaks Plasma’s style plugins every time because those are not ABI-compatible either).

            > The amount of time I spent getting out of DLL hell on Windows, on the other hand… The Linux way is better and way more stable.

            Try running an old precompiled Linux game (say Unreal Tournament 2004 for example). They can be a pain to get working. This is not just some “ooooh gotcha” case, this is an important thing that’s missing for software preservation and cross-compatibility, because not everything can be compiled from source by distro packagers, and not every unmaintained open-source software can be compiled on modern systems (and porting it might not be easy because of the same problem).

            I suppose what Linux is severely lacking is a comprehensive upwards-compatible system API (such as Win32 or Cocoa) which reduces the churn between distros and between version releases. Something that is more than just libc.

            We could maybe have had this with GNUstep, for example (and it would have solved a bunch of other stuff too). But it looks like nobody cares about GNUstep and instead it seems like people are more interested in sidestepping the problem with questionably designed systems like Flatpak.

            • navordar@lemmy.ml · ↑1 · 11 hours ago

              There was the Linux Standard Base project, but there were multiple issues with it and it was eventually abandoned. Some distributions still have an /etc/lsb-release file for compatibility.

            • Magiilaro@feddit.org · ↑2 · 1 day ago

              Unreal Tournament 2004 depends on SDL 1.3, if I recall correctly, and SDL is not a core system library on Linux or on any other OS.

              Binary-only programs are foreign to Linux, so yes, you will get issues integrating them. Linux works best when everyone plays by the same rules, and for Linux that means sources being available.

              Linux at its core is highly modifiable. Besides the kernel (and nowadays maybe systemd), there is no core system that an API could be defined against. Linux on a home theater PC is a different system than Linux on a server, than Linux on a gaming PC, than Linux on a smartphone.

              You can boot the kernel and a tiny shell as init and have a valid, but very limited, Linux system.

              Linux has its own set of rules and its own way of doing things, and trying to force it to be something else cannot and will not work.

        • MyNameIsRichard@lemmy.ml · ↑4 · 2 days ago

          The difference is that most of your software is built for your distribution, the only exception being some proprietary shit that says it supports Linux but in reality only supports Ubuntu. That’s my pet peeve, just so you know!

          • 2xsaiko@discuss.tchncs.de · ↑2 · 2 days ago

            Distributions are not the problem. Most just package upstream libraries as-is (plus or minus some security patches), which is why programs built for another distro will often just run as-is on a contemporary distro, given the necessary dependencies are installed, perhaps with some patching of the library paths (as an extreme example, plenty of packages in nixpkgs just use precompiled deb packages as a source, because nixpkgs has a very different file layout).

            Try a binary built for an old enough Ubuntu version on a new Ubuntu version however…

    • CarrotsHaveEars@lemmy.ml · ↑6 ↓3 · 2 days ago

      What you described as a weakness is actually a strength of an open source system. If you compile a binary for a certain system, say Debian 10, and distribute the binary to someone who is also running a Debian 10 system, it is going to work flawlessly, and without overhead, because the target system can get the dependencies on its own.

      The inability to run a binary built for a different system, say Alpine, is no worse than not being able to run a Windows 10 binary on Windows 98. Alpine to Debian is on the same level as 10 to 98: they are practically different systems, only flying the same flag.

      • Ephera@lemmy.ml · ↑3 · 2 days ago

        The thing is, everyone would agree that it’s a strength if the Debian-specific format were provided in addition to a format that runs on all Linux distros. When I’m not on Debian, I just don’t get anything out of it…

    • iii@mander.xyz · ↑3 ↓2 · 2 days ago

      I think WebAssembly will come out on top as the preferred runtime because of this, and because of the sandboxing.

  • irotsoma@lemmy.blahaj.zone · ↑12 · 2 days ago

    Not offering a solution here exactly, but as a software engineer and architect, this is not a Linux-only problem. It exists across all software. Very few applications are fully self-contained these days, because it’s too complex to build everything from scratch every time. A lot of software also depends on the way some poorly documented feature worked at the time, behavior that was actually a bug; when the bug is eventually fixed, the applications that depended on it break. And any time improvements are made in a library, they have the potential to break your application, and most developers don’t get time to test against every newer version.

    The real solution would be better CI/CD build systems that automatically test applications with newer versions of libraries and report dependencies better. But so many applications are short on automated unit and integration tests, because testing is tedious and many companies and younger developers consider it a waste of time/money. So really it would only work in well-maintained and well-managed open source applications. But who has time for all that?

    Anyway, it’s something I’ve been thinking about a lot at my current job as an architect for a major corporation. I’ve had to do a lot of side work to get things even part of the way there. And I don’t have to deal with multiple OSes and architectures. But I think it’s an underserved area of software development and distribution that is just not “fun” enough to get much attention. I’d love to see it at all levels of software.

  • SwingingTheLamp@midwest.social · ↑57 ↓5 · 2 days ago

    One that Linux should’ve had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means that executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that do exist are generally non-standard plug-in systems.

    A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they’re backwards-compatible with, so new versions don’t necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren’t present, allowing programs to drop certain features gracefully, so we wouldn’t need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.

    Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.
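
    The graceful-degradation part can be approximated today at the application level, though not system-wide. A sketch using Python’s ctypes, with libsndfile standing in as a hypothetical optional dependency: the program starts fine and merely drops a feature when the library is missing, instead of failing to load at all:

    ```python
    import ctypes
    import ctypes.util

    # libsndfile stands in for any optional dependency here.
    path = ctypes.util.find_library("sndfile")
    if path:
        libsndfile = ctypes.CDLL(path)  # found: audio features available
        print("audio support enabled via", path)
    else:
        libsndfile = None               # absent: degrade gracefully
        print("libsndfile not found; audio support disabled")
    ```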

    • steeznson@lemmy.world · ↑2 · 16 hours ago

      I find the Darwin approach to dynamic linking too restrictive. Sometimes there needs to be a new release which is not backwards compatible or you end up with Windows weirdness. It is also too restrictive on volunteer developers giving their time to open source.

      At the same time, containerization where we throw every library - and the kitchen sink - at an executable to get it to run does not seem like progress to me. It’s like the meme where the dude is standing on a huge horizontal pile of ladders to look over a small wall.

      At the moment you can choose to use a distro which follows a particular approach to this problem; one which enthuses its developers, giving some guarantee of long term support. This free market of distros that we have at the moment is ideal in my opinion.

  • LovableSidekick@lemmy.world · ↑3 · 1 day ago

    The term “dependency hell” reminds me of the “DLL hell” Windows devs used to complain about. Something must have changed around 2000, because I remember an article announcing “No more DLL hell,” but I don’t remember what the change was.