
A line in the sand for graphics drivers

By Jonathan Corbet
July 5, 2010
Support for certain classes of hardware has often been problematic for the Linux kernel, and 3D graphics chips have tended to be at the top of the list. Over the last few years, through a combination of openness at Intel and AMD/ATI and reverse engineering for NVIDIA, the graphics problem has mostly been solved - for desktop systems. The situation in the fast-growing mobile space is not so comforting, though. As can be seen in recent conversations, free support for mobile graphics looks like the next big problem to be solved.

At first glance, the announcement of a 2D/3D driver for Qualcomm "ES 3D" graphics cores (found in the Snapdragon processor which, in turn, is found in a number of high-end smartphones) seems like a good thing. Graphics support for this core is one of the binary blobs needed to run Android on that processor, and it seemed like Qualcomm was saying the right things:

I'm writing this email because we think it is high time that we get off the bench, into the game and push support for the Qualcomm graphics cores to the mainline kernel. We are looking for advice and comment from the community on the approach we have taken and what steps we might need to take, if any, to modify the driver so it can be accepted.

Advice and comment is what he got. The problem is that, while the kernel driver is GPL-licensed, it is only one piece of the full driver. The code which does the real work of making 3D function on that GPU runs in user space, and it remains locked-down and proprietary. Dave Airlie, the kernel graphics maintainer, made it quite clear what he thinks of such drivers:

We are going to start to see a number of companies in the embedded space submitting 3D drivers for mobile devices to the kernel. I'd like to clarify my position once so they don't all come asking the same questions.

If you aren't going to create an open userspace driver (either MIT or LGPL) then don't waste time submitting a kernel driver to me.

Dave's message explains his reasoning in detail; little of it will be new to most LWN readers. He is concerned about possible licensing issues and, at several levels, about the kernel community's ability to verify the driver and to fix it as need be. Dave has also expressed his resentment at the way mobile chipset vendors extract great value from Linux while seeming entirely unwilling to give back to the kernel they have come to depend on so heavily.

This move may strike some people as surprising. There has been a lot of pressure to get Android-related code into the mainline, but now an important component is being rejected - again. The fact that user-space code is at issue is also significant. The COPYING file shipped with the kernel begins with this text:

NOTE! This copyright does *not* cover user programs that use kernel services by normal system calls - this is merely considered normal use of the kernel, and does *not* fall under the heading of "derived work".

Normally, kernel developers see user space as a different world with its own rules; it is not at all common for kernel maintainers to insist on free licensing for user-space components. Dave's raising of licensing issues might also seem, on its face, to run counter to the above text: he is saying explicitly that closed user-space graphics drivers might be a work derived from the kernel and, thus, a violation of the GPL. These claims merit some attention.

The key text above is "normal system calls." A user-space graphics driver does not communicate with its kernel counterpart with normal system calls; it will use, instead, a set of complex operations which exist only to support that particular chipset. The kernel ABI for graphics drivers is not a narrow or general-purpose interface. The two sides are tightly coupled, a fact which has made the definition of the interface between them into a difficult task - and the maintenance of it almost as hard. While a program using POSIX system calls is clearly not derived from the kernel, the situation with a user-space graphics driver is not nearly so clear.
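
To make the distinction concrete, here is a rough sketch of what such an interface looks like from user space. The structure and ioctl below are invented for illustration, not taken from any real driver's ABI; actual drivers (i915, radeon, and so on) each define their own, rather larger, chip-specific variants.

    /*
     * Illustrative only: an invented command-submission interface of the
     * kind a GPU kernel driver exposes.  No real driver uses these names.
     */
    #include <linux/ioctl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* A "normal system call": portable, hardware-independent use of the kernel. */
    static void ordinary_kernel_use(int fd)
    {
        const char msg[] = "hello\n";
        write(fd, msg, sizeof(msg) - 1);
    }

    /*
     * The graphics interface is nothing like that.  The user-space driver
     * hands the kernel a buffer of GPU commands that it generated itself,
     * plus relocation and scheduling information that only this chip's
     * drivers understand.
     */
    struct fake_gpu_submit {
        uint64_t cmdbuf_ptr;    /* user pointer to the GPU command stream */
        uint32_t cmdbuf_dwords; /* its length, in 32-bit words */
        uint64_t relocs_ptr;    /* buffer-object relocations */
        uint32_t num_relocs;
        uint32_t engine;        /* which execution engine should run it */
        uint32_t flags;         /* chip-specific execution flags */
    };
    #define FAKE_GPU_SUBMIT _IOW('d', 0x40, struct fake_gpu_submit)

    static int submit_to_gpu(int drm_fd, struct fake_gpu_submit *req)
    {
        /* Only this chip's user-space driver can fill in *req sensibly,
         * and only this chip's kernel driver can interpret it. */
        return ioctl(drm_fd, FAKE_GPU_SUBMIT, req);
    }

Every GPU family comes with its own version of that structure, and the two halves of the driver must agree on every field; that is the sense in which they are tightly coupled.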

It should also be pointed out that, while the kernel community does not normally try to dictate licensing in user space, that community has also never felt bound to add interfaces for the sole use of proprietary code. The resistance to the addition of hooks for anti-malware programs is a classic example.

But licensing is not the only issue here. In a sense, user-space 3D graphics drivers are really kernel components which simply happen to be running in a separate address space. They necessarily have access to privileged functionality, and they must program a general-purpose processor (the GPU) with the ability to easily hose the system. Without the user-space component, the kernel will not function well. Like other pieces of the kernel, the full 3D driver must be carefully examined to be sure that there are no security problems, fatal bugs, or portability issues. The kernel developers must be able to make changes to the kernel-side driver with full knowledge of what effect those changes will have on the full picture. A proprietary user-space driver clearly makes all of this more difficult - if not impossible.
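
A sketch may make the problem concrete. The checker below is invented - the opcodes and limits belong to no real driver - but it has the general shape of the validation a kernel driver must perform before letting a user-supplied command stream reach the hardware:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Invented opcodes and limits, purely for illustration. */
    #define OP_SET_STATE  0x01          /* harmless pipeline state change */
    #define OP_DRAW       0x02          /* kick off rendering */
    #define OP_DMA_WRITE  0x03          /* GPU writes memory at an address */

    #define CLIENT_APERTURE_END 0x10000000u  /* memory this client may touch */

    /*
     * Return true if the command stream looks safe to hand to the GPU.
     * Each packet is one header dword (opcode in the low byte, payload
     * length in the upper half) followed by its payload.
     */
    static bool cmd_stream_is_safe(const uint32_t *cmds, size_t ndwords)
    {
        size_t i = 0;

        while (i < ndwords) {
            uint32_t opcode = cmds[i] & 0xff;
            uint32_t len = cmds[i] >> 16;

            if (i + 1 + len > ndwords)
                return false;           /* truncated packet */

            switch (opcode) {
            case OP_SET_STATE:
            case OP_DRAW:
                break;
            case OP_DMA_WRITE:
                /* The dangerous case: refuse writes outside the region
                 * the kernel has set aside for this client. */
                if (len < 1 || cmds[i + 1] >= CLIENT_APERTURE_END)
                    return false;
                break;
            default:
                return false;           /* unknown command */
            }
            i += 1 + len;
        }
        return true;
    }

A check like this only means something if every opcode's length and side effects are known; when the code that generates the stream is closed, the kernel side cannot say whether its own checking is sufficient.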

User-space binary blob drivers also miss out on many of the important benefits of the free software development process. They will contain bugs and a great deal of duplicated code.

What Dave (and others) are clearly hoping is that, by pushing back in this way, they will be able to motivate vendors to open up their user-space drivers as well. The history in this regard is encouraging, but mixed. Over time, hardware vendors have generally come to realize that the value they are selling is not in the drivers and that they can make their lives easier by getting their code out into the open. What did it gain all of those wireless networking vendors to implement and maintain their own 802.11 stacks? One can only imagine that they must be glad to be relieved of that burden. But getting them to that point generally required pressure from the kernel development community.

Hopefully, this pressure will convince at least some mobile 3D vendors to open up as well. That pressure would be increased and made far more effective if at least some device manufacturers would insist on free software support for the components they use. There are companies working in this area which make a lot of noise about their support for Linux. They could do a lot to make Linux better by insisting on that same support from their suppliers.

Over the years, we have seen that pushing back against binary-only drivers has often resulted in changes for the better; we now have free support for a lot of hardware which was once only supported by proprietary drivers. Some vendors never relent, but enough usually do that the recalcitrant ones can simply be routed around. One shudders to think about what our kernel might look like now had things not gone that way. The prevalence of binary-only drivers in the mobile space shows that this fight is not done, though. 3D graphics drivers are unique in many respects, including their use of user-space components. But, if we want to have a free kernel in the coming years, we need to hope that they will be subject to the same pressures.

Index entries for this article
Kernel: Device drivers/Graphics



A line in the sand for graphics drivers

Posted Jul 5, 2010 13:52 UTC (Mon) by Baylink (guest, #755) [Link] (11 responses)

I posted this to Dave's LJ, but I'll ask here, as well:

Has anyone ever put a finger on the assertions I've seen made in some places: that the vendors won't open that code because it has patent or copyright violations in it, and they know it, and they don't want to get busted...?

A line in the sand for graphics drivers

Posted Jul 5, 2010 14:35 UTC (Mon) by tzafrir (subscriber, #11501) [Link] (9 responses)

So Intel and ATI ones just happen not to have any trade secrets?

As for patents: what would be the problem with releasing the source? What secret would it reveal? It's patently clear that patents can not cover trade secrets, right?

A line in the sand for graphics drivers

Posted Jul 5, 2010 14:39 UTC (Mon) by Baylink (guest, #755) [Link] (4 responses)

As I thought I'd noted: it might reveal that the vendor who won't open the drivers is themselves violating *someone else's patent*.

A line in the sand for graphics drivers

Posted Jul 9, 2010 9:57 UTC (Fri) by yoe (guest, #25743) [Link] (3 responses)

Everyone who writes some software is violating everyone else's patent. That's not an issue.

It becomes one if you violate someone's patent knowingly; but proving that you knew about something at some distant point in the past is rather hard.

A line in the sand for graphics drivers

Posted Jul 9, 2010 10:21 UTC (Fri) by __alex (guest, #38036) [Link] (1 responses)

It does make it much easier for patent trolls to spam vendors with litigation if they can simply grep through some source files for bits of code that look vaguely infringing though right? I have no idea if vendors actually think this way but I can see where this argument comes from.

A line in the sand for graphics drivers

Posted Dec 27, 2010 15:11 UTC (Mon) by ksandstr (guest, #60862) [Link]

Program code cannot be analyzed for patent violations by simply grepping through it.

A line in the sand for graphics drivers

Posted Jul 9, 2010 17:14 UTC (Fri) by njs (subscriber, #40338) [Link]

Whether you knew about the patent only affects how much the patent owner can demand in damages -- they can in any case stop you from shipping your product and demand a chunk of all your past revenue.

Everyone who writes some software is violating other people's patents; the problem is if one of those patent owners notices and can sue you.

A line in the sand for graphics drivers

Posted Jul 5, 2010 14:52 UTC (Mon) by ernstp (guest, #13694) [Link] (3 responses)

Intel and ATI have written new implementations from scratch in the open, not open sourced existing code. That's where the big problem lies I think...

A line in the sand for graphics drivers

Posted Jul 5, 2010 15:36 UTC (Mon) by drag (guest, #31333) [Link] (2 responses)

It's a combination of different issues.

They have contractual obligations with other vendors to keep some aspects of their hardware secret. This is due to the DRM requirements they have to face. This is simply part of the reality of being a graphics hardware vendor in this day and age.

Patent issues are another one.

Copyrights can be an issue, but it depends on the vendors and how much they contract out to other businesses.

Trade secrets are another issue. Between ATI and Nvidia, sales are made or broken by the quality of their Windows drivers. That is simply a fact, and another aspect of the business that cannot be escaped.

Those issues are going to be big ones.

---------------------

Writing new open source drivers from scratch, combined with not documenting certain aspects of the video cards, can avoid most of those issues except patents.

A line in the sand for graphics drivers

Posted Jul 5, 2010 16:14 UTC (Mon) by smurf (subscriber, #17840) [Link] (1 responses)

> They have contractual obligations with other vendors to keep some aspects of their hardware secret. This is due to the DRM requirements they have to face.

DRM implies that somewhere there's encrypted / obfuscated content which some piece of code and/or hardware decrypts before displaying, and that said code needs to be coupled to the displaying hardware -- tightly enough that one cannot just capture the output.

Hardware coupling is obviously not a problem in a smartphone (unlike HDMI).

That leaves the actual de-obfuscating part of the source code, which is easily removed from any otherwise-open-source driver program.

A line in the sand for graphics drivers

Posted Jul 5, 2010 16:20 UTC (Mon) by drag (guest, #31333) [Link]

> Hardware coupling is obviously not a problem in a smartphone (unlike HDMI).

I think that is a poor assumption to make. How many PowerVR video devices are used in set-top boxes, televisions, and other devices? I don't know the answer, but it's certainly a large number of devices.

Plus there are phones that do actually have HDMI output, and many of those that do are Android devices.

Unfortunately there is plenty of DRM in cell-phone and ARM-style devices.

> encrypted / obfuscated

Yeah. DRM really only implies obfuscation. Encryption is usually used as part of the process because it's very effective in terms of protection during transport, but it's useless as a mechanism on the actual end user's device itself. So DRM depends entirely on obfuscation to work properly.

DRM is much less of an issue now than it's been in the past, but its effects are going to linger on for some years.

A line in the sand for graphics drivers

Posted Jul 5, 2010 21:03 UTC (Mon) by airlied (subscriber, #9104) [Link]

For x86 drivers there are a lot of things they consider secret, and they share a driver between Windows and Linux. But as I said in the original post, embedded is a different field: they don't have a Windows driver, since nobody really cares about it; it is the secondary platform in this space.

The only thing that might be an issue is the video decode hardware, and I'm willing to accept that maybe they can't always release source code due to the MPEG4 patent licenses etc., but generally in a lot of those systems it's just a hw video decoder, so they aren't violating anything in sw if they just provide an API.

A line in the sand for graphics drivers

Posted Jul 5, 2010 14:24 UTC (Mon) by fuhchee (guest, #40059) [Link] (1 responses)

It would be good to elaborate on the perceived key differences between user-space video drivers such as those in xfree86 for a decade+, with or without kernel-side helper code, and user-space video drivers such as these new ones.

A line in the sand for graphics drivers

Posted Jul 5, 2010 14:34 UTC (Mon) by jrn (subscriber, #64214) [Link]

Umm... the video drivers in xfree86 are free, and these new ones are not?

s/driver/documentation/

Posted Jul 5, 2010 14:48 UTC (Mon) by avik (guest, #704) [Link] (35 responses)

I don't think it's right to demand an open driver. Instead, demand full documentation of the interface. Every bit of the buffers passed into the kernel should be documented, and if anyone is interested, they can write the user code for that.

s/driver/documentation/

Posted Jul 5, 2010 15:04 UTC (Mon) by hummassa (guest, #307) [Link] (25 responses)

> I don't think it's right to demand an open driver.

I think it's right, for someone who volunteers to maintain that part of the kernel tree, to demand anything she sees fit to demand that will make it possible to do said maintenance, before she merges some patch from someone.

A closed driver has the potential to make that maintenance difficult (or even impossible), so...

s/driver/documentation/

Posted Jul 5, 2010 15:38 UTC (Mon) by kirkengaard (guest, #15022) [Link]

And it isn't simply arbitrary.

The trouble, as was mentioned, is that this isn't just an interface. We have a boatload of interfaces. I think one of the good comparisons is sound drivers. There are quite a few high-end cards that have chip drivers in ALSA, but you use JACK and some other userspace programs to get full function out of them. But the drivers are complete and perform properly (modulo some tinkering) for the chips they drive, and the userspace components sit over the driver or work through it. We're talking instead about half of a driver.

If the submitter wishes the driver to be in the kernel, it is perfectly appropriate for the developer to advise them that the whole driver needs to be open. One way to comply might be to document and re-engineer the kernel component in order to develop a compliant userspace component; another might be to write an all-kernelspace driver; another might be to rip out what "must" be binary blob for now and open everything else -- but only as a first step. I doubt that throwing documentation over the wall will make it acceptable as-is.

s/driver/documentation/

Posted Jul 5, 2010 16:10 UTC (Mon) by ebiederm (subscriber, #35028) [Link] (23 responses)

Please notice that Dave is a man and use the gender appropriate pronoun, or if you are speaking in general please use the plural, as in English that is always gender neutral.

A maintainer can ask for too much, and that is why in Linux there is always the possibility to route around a maintainer. Not that this happens often.

In this case one of the primary complaints is poor userspace ABI design, and that is always a valid issue to push back on. Even the most temporary, transient and little-used ABIs require years to phase out.

Overall it appears to be a good thing this conversation is happening, and I hope this can get settled before too much more time passes.

s/driver/documentation/

Posted Jul 5, 2010 16:41 UTC (Mon) by njs (subscriber, #40338) [Link] (22 responses)

> Please notice that Dave is a man and use the gender appropriate pronoun, or if you are speaking in general please use the plural, as in English that is always gender neutral.

...Huh, I thought it was pretty obvious that they were speaking in general, and the use of, say, alternating gender pronouns for generics is very common.

Do you make this comment every time someone uses a generic "he"?

s/driver/documentation/

Posted Jul 5, 2010 23:53 UTC (Mon) by hummassa (guest, #307) [Link] (21 responses)

And, furthermore, I was under the impression that "she" was a correct way to say "one person, any person, of any gender".

s/driver/documentation/

Posted Jul 6, 2010 1:53 UTC (Tue) by csamuel (✭ supporter ✭, #2624) [Link] (15 responses)

I would suggest "they" instead, as it's gender neutral (he and she aren't).

s/driver/documentation/

Posted Jul 6, 2010 3:38 UTC (Tue) by njs (subscriber, #40338) [Link]

Certainly "they" is an option, and often recommended for this; but "he or she", and picking one at random and alternating, are also both accepted and commonly used: http://www.unc.edu/depts/wcweb/handouts/gender.html

They all have their advantages and disadvantages, but I was more struck at the suggestion that using "she" was illegitimate, when in actual usage it plainly isn't.

s/driver/documentation/

Posted Jul 6, 2010 9:25 UTC (Tue) by rsidd (subscriber, #2582) [Link] (13 responses)

"They" is often ungrammatical when used for this purpose. At best, it leads to grammatical but horribly contorted sentence constructions. "She" is fine by me: no matter how much and how often "she" is used, it will take a while to overcome existing bias, leave alone historical bias. It's ok to refer to Dave, or other males, now and then as "she": it's certainly no worse than referring to a woman as "he", which happens to women all the time, and it does force readers to think about sexist language.

s/driver/documentation/

Posted Jul 6, 2010 10:44 UTC (Tue) by nix (subscriber, #2304) [Link] (7 responses)

What? People of unknown gender are singular they; people of known male/female gender are he/she. The unfortunate convention of referring to single people of unknown gender as 'he' is bad enough (and can nearly always be substituted with singular they or in extremis the clumsy 'he or she'), but referring to specific single people of known gender with the opposite-gendered singular personal pronoun is like spikes in the eyes. It's *always* wrong.

s/driver/documentation/

Posted Jul 6, 2010 10:54 UTC (Tue) by rsidd (subscriber, #2582) [Link] (6 responses)

What's "singular they"? Would you write "they sees fit" or "they merges some patch"? Doesn't sound like English to me.

Of course, you can use plural they if you make all references plural. This is usually awkward and sometimes impossible.

s/driver/documentation/

Posted Jul 6, 2010 11:36 UTC (Tue) by nye (subscriber, #51576) [Link]

>What's "singular they"?

Possibly you should have looked it up before proceeding.

When you talk about using the 'plural they', you are of course obliquely referring to the fact that 'they' remains morphologically plural in all (correct) uses, however its usage to refer to a singular subject is well established.

It has been the preferred style for decades, an accepted style for centuries, and an existing style in English since so long ago that the language is barely recognisable.

If you can present an example sentence where using 'he' or 'she' is grammatically correct, but 'they' is not, then I would be interested to hear it.

s/driver/documentation/

Posted Jul 6, 2010 11:56 UTC (Tue) by farnz (subscriber, #17727) [Link] (2 responses)

"Singular they", as used by authors from Shakespeare onwards, is things like "they see fit" and "they merge a patch". It's simply the same pattern as "singular you"; or art thou one of the people who insisteth that "you" must be reserved for the plural form, and who joketh about "you sees fit" and "you merges a patch"?

s/driver/documentation/

Posted Jul 20, 2010 17:10 UTC (Tue) by pdundas (guest, #15203) [Link] (1 responses)

Thou speakest wisely. But prithee tell, surely thou wantedst to say "thou insistest" or "thou jokest"?

I joke / thou jokest / he joketh, et cetera...

s/driver/documentation/

Posted Jul 20, 2010 17:17 UTC (Tue) by pdundas (guest, #15203) [Link]

Doh! Thou art right. I shall don mine coat, and quit this thread.

s/driver/documentation/

Posted Jul 6, 2010 23:49 UTC (Tue) by csamuel (✭ supporter ✭, #2624) [Link]

It's "they see fit" (or "they saw fit" for past tense), "they are merging some patches" ("they merged some patches" for past tense).

s/driver/documentation/

Posted Jul 16, 2010 7:48 UTC (Fri) by dododge (guest, #2870) [Link]

Just as more background material: the OED lists the singular use of "they" as "often used" and gives numerous examples back to 1526 "Yf..a psalme scape ony persone, or a lesson, or else yt they omyt one verse or twayne."

For further reading they reference Jespersen's "Progress in Language", which discusses it in more detail and gives many more examples. You can find scans of the 1909 2nd edition at books.google.com, with the relevant text in section 24 on pages 27-30.

s/driver/documentation/

Posted Jul 6, 2010 11:06 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

Traditionally, in English, you use the plural form as a highly respectful singular. So, for 1st person, you have the "royal we" - or use of 1st person plural for a singular entity. For second person, we've completely lost the 2nd person singular (thou), in favour of always using the 2nd person plural in its role as the respectful 2nd person singular. We also use 3rd person plural as a respectful 3rd person singular in English.

Arguably, the fix to the existing habit of subconsciously sexist language is not to just flip the sexism round some of the time, but to make the same move for 3rd person as we've made for 2nd person - drop he/she/it when referring to a singular entity (except when gender is important), and use the 3rd person plural form ("they are" instead of "he/she/it is") in its traditional role as a respectful singular.

So much grammar correction, so little correct!

Posted Jul 7, 2010 21:22 UTC (Wed) by baldridgeec (guest, #55283) [Link] (3 responses)

Or better still one could utilize a form which is less oft seen in informal English, but easily remembered by a student of linguistics or esp. Romance languages - the third person indefinite personal pronoun. It exists for precisely this sort of case.

So much grammar correction, so little correct!

Posted Jul 7, 2010 21:26 UTC (Wed) by farnz (subscriber, #17727) [Link] (2 responses)

Except that the modern English usage of "one" places it as a variation on the first person, not the third - one tends to use it not to mean "an unidentified individual", but to mean "an individual from the set that I would cover if I were to use we".

So much grammar correction, so little correct!

Posted Jul 7, 2010 21:51 UTC (Wed) by baldridgeec (guest, #55283) [Link] (1 responses)

Is that really accurate? My observation has been that one tends to use it on behalf of the group for which one is advocating in an argument, but I wouldn't want to rephrase the first half of this sentence using the phrase "I tend," because I'm not referring to myself, but to every instance I have ever heard or read in which the case was used.

(Rereading this before submission, I realize that you could just quote the above paragraph and respond with "QED." :) More meat follows below.)

I assume that that sort of observation (that it coincides with an individual from the first person plural set) stems from the fact that one does not often pose arguments which prescribe the behavior of groups which exclude oneself - that doesn't mean it can't happen though.

One may believe that one's computer is powered by hamsters on exercise wheels, but one would be incorrect. :)

So much grammar correction, so little correct!

Posted Jul 8, 2010 10:20 UTC (Thu) by farnz (subscriber, #17727) [Link]

It's a difficult one (the joys of a language defined by usage, not prescribed by an academy); in my experience the use of "one" is either a "posh way of saying I", or "this is what should happen in an ideal world, not necessarily what anyone in particular does". Singular they feels slightly weird, but doesn't come with that baggage.

Of course, this is all based on past experience - and continued use of "one" as a gender-neutral singular would change the implications. If only programming languages had a similar habit of changing to adapt to what is meant, not what it used to mean :)

s/driver/documentation/

Posted Jul 6, 2010 10:13 UTC (Tue) by nye (subscriber, #51576) [Link]

>I was under the impression that "she" was a correct way to say "one person, any person, of any gender".

As a purely factual point, this is incorrect in English.

s/driver/documentation/

Posted Jul 6, 2010 11:09 UTC (Tue) by sorpigal (guest, #36106) [Link] (3 responses)

In English the correct gender-neutral term is "he." Some people don't like this for various reasons and choose to substitute "she" as gender neutral, often in an attempt to combat a perception of male dominance or out of a sense of fairness. "They" is also often used as a neutral form but it is incorrect (ungrammatical) when used to refer to a singular entity.

Regardless of the reasons for the origin of the use, "he" and "him" are correct when the gender is unknown or ambiguous. Other forms, no matter how common, are not good English.

English

Posted Jul 6, 2010 11:43 UTC (Tue) by samth (guest, #1290) [Link]

> Regardless of the reasons for the origin of the use, "he" and "him" are correct when the gender is unknown or ambiguous. Other forms, no matter how common, are not good English.
This is totally false. First, the use of singular "they" is long-standing and good English, used by "Addison, Austen, Chesterfield, Fielding, Ruskin, Scott, and Shakespeare", to quote the Chicago Manual of Style. Second, prescriptivism is wrong about language, as a general principle, and thus your second sentence is false regardless of the particular topic.

s/driver/documentation/

Posted Jul 6, 2010 11:49 UTC (Tue) by nye (subscriber, #51576) [Link]

>"They" is also often used as a neutral form but it is incorrect (ungrammatical) when used to refer to a singular entity.

Simply wrong. There is no reason to support this assertion; it's a modern invention with no reasoning behind it - simply an arbitrary decision by a handful of grammatical prescriptivists who choose to ignore the large corpus of historical English text, not to mention the overwhelming current usage.

Since we mostly hear it coming from Americans, I conjecture that it may originate in Strunk and White (a highly questionable but ubiquitous American grammar guide).

s/driver/documentation/

Posted Jul 6, 2010 13:24 UTC (Tue) by anselm (subscriber, #2796) [Link]

> Regardless of the reasons for the origin of the use, "he" and "him" are correct when the gender is unknown or ambiguous. Other forms, no matter how common, are not good English.

Please take this over to the Language Log blog at http://languagelog.ldc.upenn.edu/nll, where linguists, i.e., professionals who know a great deal about things like English grammar and usage, will quickly disabuse you of this notion.

s/driver/documentation/

Posted Jul 5, 2010 17:08 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (6 responses)

Once you're at the point of something as complex as a graphics driver, interface documentation is unlikely to be both precise and accurate. Say we end up with an open kernel driver, a closed userspace implementation and an open userspace implementation. Part of the interface documentation can be interpreted in two different ways, and interpreting it one way gives a significant performance boost to the open component and breaks the closed component. Do we accept the patch or refuse the patch? What if one interpretation allows DMAing to arbitrary addresses?

And this ignores the fact that any interface documentation for a graphics driver's kernel component is likely to be of the form "This ioctl submits a buffer of GPU commands to the device". These commands will typically not be interpreted by the kernel code beyond certain sanity checking, so documenting the interface does little to tell us how to implement a userspace version of the same code. Interface documentation is better than no interface documentation, and hardware documentation is better still. But if we have a kernel component with a well-defined ABI then that impairs our ability to implement a userspace driver unless we also develop a parallel kernel component. And that way lies madness.
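
To make that concrete, a handler behind documentation of that form tends to amount to little more than the following sketch (the names are invented and the error handling trimmed; this is not any real driver's code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_SUBMIT_DWORDS 4096

    struct fake_submit {
        const uint32_t *cmds;   /* command stream built by user space */
        uint32_t ndwords;
    };

    /* Stand-in for copying the commands into the hardware ring buffer. */
    static void ring_write(const uint32_t *cmds, uint32_t ndwords)
    {
        (void)cmds;
        (void)ndwords;
    }

    /* The documented operation: "submits a buffer of GPU commands". */
    static int fake_submit(const struct fake_submit *req)
    {
        uint32_t *copy;

        /* "Certain sanity checking": a size limit, and not much more. */
        if (req->ndwords == 0 || req->ndwords > MAX_SUBMIT_DWORDS)
            return -1;

        copy = malloc(req->ndwords * sizeof(*copy));
        if (copy == NULL)
            return -1;
        memcpy(copy, req->cmds, req->ndwords * sizeof(*copy));

        /* The kernel never parses the commands themselves; whatever the
         * user-space driver generated goes straight to the GPU. */
        ring_write(copy, req->ndwords);
        free(copy);
        return 0;
    }

Documenting that interface tells you how to pass bytes through, not what the bytes must contain; that knowledge lives entirely in the user-space driver.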

s/driver/documentation/

Posted Jul 5, 2010 17:35 UTC (Mon) by avik (guest, #704) [Link] (5 responses)

> Once you're at the point of something as complex as a graphics driver, interface documentation is unlikely to be both precise and accurate.
Then the driver+documentation is unlikely to be accepted. We need to insist on quality docs, just as we insist on quality code.

> Say we end up with an open kernel driver, a closed userspace implementation and an open userspace implementation. Part of the interface documentation can be interpreted in two different ways, and interpreting it one way gives a significant performance boost to the open component and breaks the closed component. Do we accept the patch or refuse the patch?
We request a clarification to the specification.

> What if one interpretation allows DMAing to arbitrary addresses?
Then it's clearly a bug. Both driver and documentation need to be fixed.

None of the above changes when you get an open source driver instead of documentation. In fact, a driver is less likely to be complete (since it won't implement all capabilities), though more likely to be accurate (since it can be tested). Documentation contains a superset of the information in a driver, since you can write a driver based on the documentation, but you can't write the entire documentation from reading the driver source.

> And this ignores the fact that any interface documentation for a graphics driver's kernel component is likely to be of the form "This ioctl submits a buffer of GPU commands to the device".
Such documentation should be rejected. Documentation should describe exactly what happens with the bits, either directly or by referring to hardware documentation.

> These commands will typically not be interpreted by the kernel code beyond certain sanity checking, so documenting the interface does little to tell us how to implement a userspace version of the same code. Interface documentation is better than no interface documentation, and hardware documentation is better still.
What you describe is not interface documentation, but tunnel documentation. That should be rejected.

> But if we have a kernel component with a well-defined ABI then that impairs our ability to implement a userspace driver unless we also develop a parallel kernel component. And that way lies madness.
Certainly, I got confused just reading that last paragraph.

s/driver/documentation/

Posted Jul 5, 2010 17:40 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (4 responses)

If we have full hardware documentation and we're expected to write our own userspace, then we also want to write our own kernel code. There's no incentive whatsoever for us to merge the upstream provided kernel code. If we do then we provide an interface that we're expected to support forever (see the argument over nouveau breaking ABI), and the only consumer of that interface is a closed userspace driver.

s/driver/documentation/

Posted Jul 5, 2010 17:50 UTC (Mon) by avik (guest, #704) [Link] (3 responses)

> If we have full hardware documentation and we're expected to write our own userspace, then we also want to write our own kernel code.
Why? If the driver is good, accept it.

> There's no incentive whatsoever for us to merge the upstream provided kernel code.
You get not to write that much code.

> If we do then we provide an interface that we're expected to support forever (see the argument over nouveau breaking ABI), and the only consumer of that interface is a closed userspace driver.
Of course we should encourage an open source userspace driver, and with full documentation, there is really no reason not to open source the driver. But as a rule, adding a syscall should not require adding a consumer for that syscall.

s/driver/documentation/

Posted Jul 5, 2010 17:56 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (2 responses)

Because merging the upstream kernel component requires either slavishly adopting the same interface (and thus limiting our ability to write a driver), keeping one kernel driver that supports two different interfaces (maintenance nightmare) or providing two different kernel drivers (one of which will probably end up bitrotting). If the only consumer of the kernel code is a closed driver, we don't want it. If accepting it constrains our ability to write an open driver, we don't want it. In summary - we don't want it. It's great as an example of driving the hardware, and if it comes with documentation it's an excellent basis for the driver that does actually get merged. But it's not going into the kernel.

s/driver/documentation/

Posted Jul 5, 2010 18:02 UTC (Mon) by avik (guest, #704) [Link] (1 responses)

If the interface isn't good enough for an open driver, you reject the patch. IOW existing code (open or closed) can not be used as an argument for forcing a bad API on the kernel.

You end up with one driver that can support both the (modified) closed user driver and a newly written open driver.

s/driver/documentation/

Posted Jul 5, 2010 20:23 UTC (Mon) by airlied (subscriber, #9104) [Link]

avik, I don't think you understand how complex these interfaces are.

It's not something you can assess in abstract form; i.e., until you've written a complete graphics driver, both kernel and userspace components, you rarely know if the API you chose is going to be correct and performant. However you generally pick the 80% interface, go with that, and hope you can easily build the 100% interface on top of it later.

However, unless a single group is developing the kernel and userspace drivers, generally that API is going to be useless and really constrain anyone else. So we end up with a driver in the kernel providing an API that only a closed source userspace can exercise, alongside an API that only an open source userspace can exercise - why should we introduce the first API at all?

I don't mind introducing reduced-functionality kernel drivers that don't expose major userspace APIs, just to serve as an example of how these GPUs work. I don't want a driver with an API commitment and no way for anyone to make sure the API continues working.

s/driver/documentation/

Posted Jul 5, 2010 20:30 UTC (Mon) by airlied (subscriber, #9104) [Link]

I actually said an open driver, or open docs + writing an open driver.

But merging a driver whose only use is exposing an API for a closed source userspace to use is neither of those things.

The API is the problem: adding a restrictive API that we have to maintain indefinitely, with no userspace code to test it, is the core of the problem from a maintainer's point of view.

s/driver/documentation/

Posted Jul 8, 2010 10:07 UTC (Thu) by willnewton (guest, #68395) [Link]

It is my understanding that Intel (and their subcontractors) had full documentation of the GMA500 hardware under NDA, but were unable to produce a working driver.

Odd choice of licences

Posted Jul 5, 2010 15:21 UTC (Mon) by epa (subscriber, #39769) [Link] (17 responses)

Why 'MIT or LGPL' as the licence requirement for the userspace driver? Wouldn't GPL be okay too? (Ideally, I suppose, 'GPLv2 or later' permission, since then code could be moved into the kernel tree if necessary.)

Odd choice of licences

Posted Jul 5, 2010 15:44 UTC (Mon) by tao (subscriber, #17563) [Link]

Because X.org is licensed under the MIT/X11-license?

Odd choice of licences

Posted Jul 5, 2010 15:58 UTC (Mon) by vonbrand (subscriber, #4458) [Link] (15 responses)

The userspace pieces have to interact with assorted userspace programs, some of which are closed source, thus LGPL at most, preferably MIT as that is the default license for X.org.

GPLv3 is out, as the kernel won't go GPLv3 in our lifetimes (and the need to exchange pieces with the userspace part is a real possibility); thus GPLv2+ is also out.

Odd choice of licences

Posted Jul 5, 2010 16:16 UTC (Mon) by mjr (guest, #6979) [Link]

Otherwise yeah, but I'd like to nitpick that GPL3 being out doesn't mean that GPL2+ is out; it merely means that the license would be compatible with GPL3(+) code in addition to (kernel) GPL2 code, thus potentially enabling the use of the code in more places.

Odd choice of licences

Posted Jul 5, 2010 16:46 UTC (Mon) by epa (subscriber, #39769) [Link] (13 responses)

Surely if MIT licence is okay, then 'GPL v2, or later, at your option' is also okay? If not, why not?

Odd choice of licences

Posted Jul 5, 2010 17:16 UTC (Mon) by bronson (subscriber, #4806) [Link]

Not if you want any chance of upstreaming the code into X11 or any of the other graphics-related projects. They're probably thinking longer term.

Odd choice of licences

Posted Jul 5, 2010 22:54 UTC (Mon) by airlied (subscriber, #9104) [Link]

The code ends up linked into any 3D application, being GPL would be rather limiting on users of the code via the defined GL APIs. Hence MIT or LGPL.

Odd choice of licences

Posted Jul 5, 2010 23:52 UTC (Mon) by elanthis (guest, #6227) [Link] (10 responses)

Because the GPL in any version is far more restrictive than MIT and infects applications written on top of it. You'd basically be saying that the entire GL/Mesa/Gallium stack using the driver would have to be considered as GPL, and thus any apps written to those interfaces (which would load the GPL driver into the apps' address space and link it into the program as a whole) must also be GPL. Which in turn really sucks for anyone who uses their devices for actual work or play in the real world where much interesting software is still non-Free with no Free alternatives. Especially in gaming, which is a highly important use of computers and mobile devices for most people (something the Free Software community always seems to underestimate and ignore).

Looking purely at a Free Software world, I don't think graphics hardware would even be as advanced as it is today, as the proprietary gaming market is really what pushed a lot of the innovation and advancement in hardware, like shaders and the general massive speed increase. Most high end professional rendering is still done on the CPU using a drastically different rendering model than OpenGL/DX, and the CAD market isn't really a huge user of much beyond basic polygon rendering.

Games are what pushed graphics hardware to where it is, and every single moderately advanced gaming graphics engine is still developed in a speedy, schedule-oriented, high-risk fashion that the bazaar development model just isn't good for (you really need a tightly focused team of highly skilled individuals to push out a quality game from start to finish, not a constantly changing army of somewhat skilled hobbyists contributing bits and pieces of itch-scratching functionality over many years of development; I'm considering writing an article on why this is and what the potential solutions are for the Free Software world, potentially for LWN.)

Odd choice of licences

Posted Jul 8, 2010 19:28 UTC (Thu) by robert_s (subscriber, #42402) [Link] (8 responses)

"I'm considering writing an article on why this is and what the potential solutions are for the Free Software world, potentially for LWN."

Be warned if it's for LWN your ideas will have to be quite well thought out and cogent.

Talking about an "infectious GPL" isn't a good start.

Odd choice of licences

Posted Jul 9, 2010 16:35 UTC (Fri) by mpr22 (subscriber, #60784) [Link] (7 responses)

Actually, when the GPL is applied to a library, rather than a program, I'm entirely sympathetic to the viewpoint that describes it as "infectious".

Odd choice of licences

Posted Jul 9, 2010 17:24 UTC (Fri) by njs (subscriber, #40338) [Link] (6 responses)

The problem with the 'infection' metaphor is that infections are things that spread on their own, against the person being infected's will.

When a library is under the GPL, that's not an infection, it's a price -- you can use the library if you pay it back by freeing your own software in return, or you can not use the library and not pay that price. Totally up to you.

Ironically, some of the people who hate the idea of this kind of quid-pro-quo rant about how it's 'communist'. (And, to drive the point home, also hate the first-sale doctrine, traditional contract law as applied to EULAs, etc.) Really it's 'capitalism' they seem to object to.

Odd choice of licences

Posted Jul 12, 2010 9:03 UTC (Mon) by mpr22 (subscriber, #60784) [Link] (5 responses)

Which is fine until the price becomes "you can't write 3D games on Linux without first reimplementing the graphics driver's userspace library and everything that links against it".

Odd choice of licences

Posted Jul 12, 2010 14:10 UTC (Mon) by njs (subscriber, #40338) [Link] (4 responses)

Yes, that's generally agreed to be too high a price to impose (or, for the more free-software minded, it's agreed that the costs in terms of network effects of locking proprietary software out of the platform entirely are worse than the costs of that software existing). That's why everyone agrees that graphics drivers should be under something like the MIT license.

But that still has nothing to do with "infections".

Odd choice of licences

Posted Jul 12, 2010 15:40 UTC (Mon) by bronson (subscriber, #4806) [Link] (3 responses)

Nothing to do with infections? If a developer wants to use a GPLed library in a proprietary project, his choices are:
- Clean-room rewrite the library. Huge waste of time.
- Relicense the entire proprietary project under a GPL-compatible license.

I'm sympathetic to how it could appear like the license is trying to spread on its own. Obviously "infection" is not 100% accurate (what metaphor is?), but I haven't seen a better way to oversimplify this fairly unique aspect of the GPL. I'm afraid "infect" will be used until someone can think of a more appropriate term.

Odd choice of licences

Posted Jul 12, 2010 15:44 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link]

A simpler and less loaded term is "reciprocal" license as opposed to "permissive" license.

Odd choice of licences

Posted Jul 12, 2010 16:11 UTC (Mon) by nye (subscriber, #51576) [Link]

With a proprietary license, there isn't even the second option, just the first. And yet nobody describes them as 'infectious'. This makes it sound like the addition of that extra choice is a *bad* thing.

Odd choice of licences

Posted Jul 14, 2010 10:55 UTC (Wed) by mtorni (guest, #3618) [Link]

Regarding using a GPLed library in a proprietary project, you suggested the choices are:
#1 Clean-room rewrite the library. Huge waste of time.
#2 Relicense the entire proprietary project under a GPL-compatible license.

I'd like to add two more options to permit a fair comparison:

#3 Relicense (buy) the free library under a license permitting use
#4 Use the library as such

Now the fair comparison goes:
To use an existing non-free library, apply option #1 or #3, #4
To use an existing GPL'd free library, apply option #1, #2 or #3
To use an existing MIT-licensed library, apply option #4 (option #2 and #3 are still recommended, and #1 might come later in the project if needs change)

With a GPL'd library you have one more choice in this setting.

The comparison becomes more interesting once you consider the options when using libraries in free software or BSD/MIT-licensed software.

It also happens frequently that the most benefit would be had by not writing proprietary software at all, to tap the largest amount of existing free software and interested developers.

Odd choice of licences

Posted Jul 8, 2010 20:29 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

Please do write your ideas up!

What graphics card should one buy?

Posted Jul 5, 2010 16:15 UTC (Mon) by rbrito (guest, #66188) [Link] (11 responses)

I think that what I am posting here may be a silly question, but what should one user buy, in face of the current situation?

I would like to put together a new system for my development and one part that I have never understood very well is that related to graphics, particularly in the sense of being able to use it in its full potential.

The situation for desktops is now more comfortable, but it is still not 100% clear for a luser: for instance, is it OK to buy nvidia hardware? The idea that I may be supporting a company that only has its hardware working with reverse-engineered drivers doesn't seem right.

In comparison, AMD/ATI cards seem like they "should have the blessings", but the last time I looked at the features that the radeon/radeonhd drivers support, there was a good number of TODO items for cards that have been out for some time...

http://www.x.org/wiki/RadeonFeature
http://www.x.org/wiki/radeonhd%3Afeature
http://www.x.org/wiki/RadeonProgram

So, what should one Freedom-conscious user choose in face of the current situation?

Thanks for any comments.

What graphics card should one buy?

Posted Jul 5, 2010 17:04 UTC (Mon) by nix (subscriber, #2304) [Link] (2 responses)

ATI cards have a few TODOs left, but they *work*. Composition works. 3d works well enough for things like scorched3d to work. Shaders don't work yet, but they don't work for *any* cards under Linux (the Mesa layers aren't stable yet: that's part of Gallium). (That the features list says MOSTLY for all of these simply says that the drivers are ready when Gallium is, AIUI.)

Of the r600/r700 TODOs on that list:

That video decoding using the 3D engine and UVD remain on the TODO list does not prevent video playback: it only means that the CPU has to do the video decoding. If you could play back a video on a lesser card, you'll be able to play it back on r600/r700 right now. Shaders are awaiting Gallium. Antialiasing I don't know about; HDMI audio I don't pay attention to as I've got no hardware that cares about it.

What graphics card should one buy?

Posted Jul 5, 2010 19:11 UTC (Mon) by svena (guest, #20177) [Link]

Shaders very much work in Mesa, and on the Intel side, at least as far as GLSL 1.20.

It's also starting to appear for the (Gallium) r300 driver.

I agree

Posted Jul 30, 2010 17:22 UTC (Fri) by moxfyre (guest, #13847) [Link]

AMD/ATI have their hearts and/or heads in the right place. They are supporting full-featured 3D drivers with documentation and developer time. Most things work already (it's amazing how fast a 3D driver can be developed when the vendor cooperates!), and insofar as a few things don't, it's not because of vendor obstruction but just because of the large amount of complex code and documentation that has to be produced.

On the other hand, Nvidia has *never* helped with the development of the open-source Nouveau drivers. Those only work because of reverse-engineering.

Intel has been cooperating with and funding open-source graphics driver development for the longest time, so their drivers work well for nearly everything. Intel graphics on my laptop work flawlessly with suspend/HDMI/kernel mode-setting, etc. etc. etc.

So yeah, Intel > ATI > Nvidia in terms of practical features, and Intel ~ ATI >> Nvidia in terms of "vendor doing the right thing these days."

What graphics card should one buy?

Posted Jul 5, 2010 17:36 UTC (Mon) by bronson (subscriber, #4806) [Link] (3 responses)

In my experience: Intel > ATI > nVidia

There are exceptions of course (Intel's GMA500 screwup) but, in general, go with Intel if you value compatibility and stability, and ATI if you want performance and don't mind wrestling with the drivers a bit.

This is the type of wrestling I mean, nothing major: http://bugs.freedesktop.org/show_bug.cgi?id=19943

What graphics card should one buy?

Posted Jul 5, 2010 18:23 UTC (Mon) by salimma (subscriber, #34460) [Link]

That's my experience too, but be warned that the latest ATI chipset (Evergreen a.k.a. R5xxx) is still not fully usable with the open-source driver; no clock-throttling (bad for battery life) and no DRI support yet, despite the hardware being released last autumn.

My netbook (Intel graphics) can perform 3D effects that put my laptop (ATi) to shame, and the power drain from the GPU means I barely get 90 minutes of usage out of the standard battery.

What graphics card should one buy?

Posted Jul 6, 2010 18:38 UTC (Tue) by rriggs (guest, #11598) [Link] (1 responses)

Let's be clear: 1 year old ATI > current nVidia.

I cannot speak for Intel, as their video hardware isn't in the same league as the other two.

With proprietary drivers, nVidia wins hands down for ease of use.

OpenCL (GPGPU) support using open source drivers is non-existent. One must use proprietary drivers. And for this, I prefer ATI.

What graphics card should one buy?

Posted Jul 8, 2010 1:09 UTC (Thu) by brouhaha (subscriber, #1698) [Link]

I think there's little question that the nVidia proprietary drivers are good. I buy ATI rather than nVidia, even though I currently run proprietary drivers, because ATI supports open source while nVidia does not. nVidia is arguably more hostile toward open source than Microsoft.

If you need Blender, stick with proprietary nVidia

Posted Jul 7, 2010 9:02 UTC (Wed) by sdalley (subscriber, #18550) [Link] (2 responses)

If you're thinking of using Blender3D, the proprietary nVidia driver is the only reliable game in town at the moment.

On the X.org wiki RadeonProgram support matrix, Blender3D support is recorded as GOLD for the older Radeon R300 series. Looking at the small print, this means "(Blender) 2.49 requires low impact fallbacks to draw all interface symbols (stipple lines for lamp types, etc), but that affects speed. 2.50 requiries changing triple buffer mode to something else, or unusable (app problem, it seems to happen with other brands and operating systems too)."

The more recent R500 R600 series chipset support is rated as GARBAGE/UNKNOWN. The current R700 is SILVER, which, being translated, means "(26 Oct 2009) [mesa-git] Crashes on many operations and does not update its interface correctly."

For Blender, Radeon is not quite there yet, in other words. Stick with programs marked PLATINUM in the support matrix if you actually need to get stuff done.

If you need Blender, stick with proprietary nVidia

Posted Jul 7, 2010 22:12 UTC (Wed) by svena (guest, #20177) [Link]

Don't rely too much on the RadeonProgram wiki page; much of the information is quite outdated. Possibly because updating it is such a hassle.

If you need Blender, stick with proprietary nVidia

Posted Jul 8, 2010 19:23 UTC (Thu) by robert_s (subscriber, #42402) [Link]

"For Blender, Radeon is not quite there yet, in other words. Stick with programs marked PLATINUM in the support matrix if you actually need to get stuff done."

Nonsense. I've been using blender on my r200 with the open radeon driver for years without problems.

What graphics card should one buy?

Posted Jul 8, 2010 19:35 UTC (Thu) by robert_s (subscriber, #42402) [Link]

"So, what should one Freedom-conscious user choose in face of the current situation?"

AMD.

The Free drivers work fine for most uses. If you find yourself needing a particular advanced feature, you can use the nonfree driver until the Free one supports it.

A line in the sand for graphics drivers

Posted Jul 6, 2010 11:49 UTC (Tue) by nhippi (subscriber, #34640) [Link] (12 responses)

> Over the last few years, through a combination of openness at Intel and AMD/ATI and reverse engineering for NVIDIA, the graphics problem has mostly been solved - for desktop systems.

That is an extremely optimistic version of the Linux graphics story. While open drivers for intel, ati and nvidia exist, the window of RELIABLY working hardware is sometimes incredibly thin. Too old a chip (Intel 855GM was really buggy around the 2.4..2.9 versions of the intel driver) and support is spotty. Too new (GMA500) and no support at all. Nouveau works really well only between NV30<->NV50. I would assume ATI has the same issue of code supporting old HW getting bitrotted and new HW support not being ready yet.

My estimate is that open graphics drivers in Linux would need at least 2-3x the current manpower to keep up with the hardware and kernel infrastructure changes.

A line in the sand for graphics drivers

Posted Jul 6, 2010 14:35 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

"That is a extremely optimistic version of the Linux graphics story. While open drivers for intel,ati and nvidia exist, the window of RELIABLY working hardware is sometimes incredibly thin. Too old chip (Intel 855GM was really buggy around 2.4..2.9 versions of intel driver) and support is spotty."

Wow. That's old.

"Too new (GMA500) and no support at all."

New _Intel_ chips are supported just fine. GMA500 is not Intel chip, it's licensed Poulsbo hardware.

"Nouveau works really well only between NV30<->NV50."

Who cares about earlier chips? And work on the recent Fermi cards is already in progress.

"I would assume ATI has the same issue of code supporting old HW getting bitrotted and new HW support not being ready yet."

ATI even supports kernel modesetting on R200!

A line in the sand for graphics drivers

Posted Jul 6, 2010 15:45 UTC (Tue) by nix (subscriber, #2304) [Link] (9 responses)

ATI even supports kernel modesetting on *r100*. As far as I know, everything that uses the ati driver supports KMS now, modulo bugs. (So that means the poor 1990s-vintage mach64 users are still out of luck ;} )

A line in the sand for graphics drivers

Posted Jul 7, 2010 10:03 UTC (Wed) by nhippi (subscriber, #34640) [Link] (4 responses)

> everything that uses the ati driver supports KMS now, modulo bugs

Like the intel driver supports i855? Modulo the bugs that made it completely lock up at random moments in recent versions of the driver (the 2.10 version appears stable; won't try anything newer now that I have a working setup again).

Then again, I have no ATI display adapter use experience. So it just might be the only video driver that never hangs, shows corrupted textures, has problems resuming from suspend, etc when used with too old or new hardware variants...

The too old/too new issue is also with driver versions.

1. Report a bug on driver X.
2. "Thats too old, it might be fixed in the git head"
3. compile, try, hit another issue
4. "you are running it on too old kernel, upgrade to latest kernel from the drm tree"
5. compile, try, hit some other bugs
6. "you are running the developer versions of kernel and graphics driver, of course there are some bugs."

I'm not criticizing the driver developers. There are just too few dedicated and active X driver developers compared to the number of hardware variants that need supporting and the complexity of graphics driver development...

A line in the sand for graphics drivers

Posted Jul 7, 2010 18:03 UTC (Wed) by nix (subscriber, #2304) [Link]

Er, that's why I said 'modulo bugs'. Given the immense amount of variation between different ATI video cards it would be astonishing if all of them worked all of the time. Nonetheless, they mostly seem to mostly work (admittedly I don't do any high-end 3D stuff: possibly that is more broken).

A line in the sand for graphics drivers

Posted Jul 8, 2010 16:26 UTC (Thu) by Thalience (subscriber, #4217) [Link]

FWIW, the Intel 8xx chips have hardware issues (related to broken cache-coherency between the GPU and CPU [0]). Older versions of the driver never tried to do any memory management, so they were not affected by the hardware problems.

I have an 855-based laptop as well, and the lockups are very frustrating. But it isn't a matter of "driver bugs" so much as "failure to find a good workaround for hardware bugs".

[0] http://bugs.freedesktop.org/show_bug.cgi?id=26345#c34

A line in the sand for graphics drivers

Posted Jul 12, 2010 19:37 UTC (Mon) by tajyrink (subscriber, #2750) [Link] (1 responses)

>> everything that uses the ati driver supports KMS now, modulo bugs
> Like the intel driver supports i855?

No, there's no such chip-wide breakage in ati land. I've got an r200 working fine, and the r100 is reportedly also pretty OK (for the class of hardware it is) under KMS. I've also got an r700 running with the ati driver without a hitch, and it's more stable (that is, stable) than the proprietary driver.

A line in the sand for graphics drivers

Posted Jul 15, 2010 16:00 UTC (Thu) by nix (subscriber, #2304) [Link]

Quite. Things that I've never managed to get running before (serious work stuff like Penumbra: Overture) are finally working. When shaders work too, I suspect the world will end because there will be no further reason for it to exist.

A line in the sand for graphics drivers

Posted Jul 8, 2010 22:11 UTC (Thu) by Tet (subscriber, #5433) [Link] (3 responses)

> As far as I know, everything that uses the ati driver supports KMS now, modulo bugs

I must just be unlucky, then. It seems all of my ATI cards are among those with bugs (which, yes, have all been reported). I have yet to find an ATI card with working KMS :-(

A line in the sand for graphics drivers

Posted Jul 9, 2010 17:15 UTC (Fri) by nix (subscriber, #2304) [Link] (2 responses)

Are yours all ATI Mobility cards? They do seem to be more iffy than mainstream desktop cards, perhaps because every laptop vendor hooks the things up differently.

(I've got a 4870, FWIW.)

A line in the sand for graphics drivers

Posted Jul 10, 2010 22:01 UTC (Sat) by glisse (guest, #44837) [Link] (1 responses)

Tet, can you give a couple of links to your bugs? Just so I can take a look at them.

A line in the sand for graphics drivers

Posted Jul 10, 2010 22:50 UTC (Sat) by Tet (subscriber, #5433) [Link]

The one for the machine I'm on at the moment is 509031, and it seems to already be assigned to you. I have to boot with nomodeset in order to get a usable display.
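
For reference, nomodeset is a kernel command-line parameter that disables kernel modesetting at boot. A minimal sketch of applying that workaround, assuming a GRUB 2 setup (the file path and the update command below are distribution-specific assumptions and vary between distros):

    # /etc/default/grub -- append nomodeset to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

    # regenerate grub.cfg afterwards (Debian/Ubuntu-style command shown)
    $ sudo update-grub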

A line in the sand for graphics drivers

Posted Jul 6, 2010 17:21 UTC (Tue) by drag (guest, #31333) [Link]

> GMA500 is not Intel chip, it's licensed Poulsbo hardware.

It's a PowerVR SGX design from Imagination Technologies. It's exactly the same sort of hardware that is giving Linux fits in the mobile space.

Remember the Nokia N800 and N810? Those types of chips were so troublesome that those tablets used no acceleration at all!

http://en.wikipedia.org/wiki/PowerVR

The old Intel 8xx series 'Extreme 3D Blaster' type devices used some PowerVR-licensed stuff, I think, but I have no idea how much. The GMA series, up until the GMA 500 and whatever Intel is using in Moorestown, were pure Intel designs.

A line in the sand for graphics drivers

Posted Jul 6, 2010 17:45 UTC (Tue) by rahvin (guest, #16953) [Link] (1 responses)

Not being a programmer or an EE I may have missed this, but it was my understanding that the graphics cores in mobile chips are one-off designs. For example, although the PVR core may be similar, every chip that includes the core has differences, and with each generation these chips change significantly. So even though there may only really be 3-4 ARM designs and 1-2 graphics cores, combined, every iteration is really a separate and distinct version with its own bugs and kinks.

If that is true, what is the point in ever merging any driver for these devices into mainline? If the driver is a one-off that has to be revised for every revision or version produced, there could end up being a LOT of drivers that are all different. Factor that out over a decade and you could end up with a thousand different drivers for one-off designs that are abandoned or revised every six months to a year.

I see no point in merging drivers for devices that have no stability, longevity, or persistence. At least the Intel/ATI/NVIDIA hardware is a consistent design for a period of time, with many more sales per design and often a single design spanning multiple products. Some of these mobile designs could be used in only a few products and end up with so little market share that you are dealing with a design used by 0.00001% of the global population. The fear of bit-rot on these drivers is very well founded, IMO.

Unless the manufacturers can come together and promise to maintain an interface for a period of time, so that ALL products (or at least >50%) in the class use the same interface, there is little purpose in merging, IMO.

A line in the sand for graphics drivers

Posted Jul 8, 2010 8:01 UTC (Thu) by daniels (subscriber, #16193) [Link]

Not really. At least for Imagination chips (and I suspect Imagination accounts for over 90% of mobile GPUs on the sorts of platforms we're talking about), there have only ever been two cores: the MBX previously, and the SGX now. The MBX, which shipped in but was not used by the Nokia N800 and N810, is a direct descendant of the core used in the Sega Dreamcast - remember that? It had different integration points with every chip, but I don't remember any meaningfully different versions of the core.

There are a number of different versions of the SGX (530 in the N900, 535 shipped by some other vendors, 540 shipped by Intel IIRC), all with slightly different abilities/restrictions, but they mostly behave the same from the driver's point of view. Some of those cores have different revisions to fix hardware issues, but again, from the driver's point of view this is generally just a one- or two-liner.

This may already be available. . .

Posted Jul 8, 2010 14:57 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link] (1 responses)

While the focus on the technical and licensing aspects is great, there is room to improve the marketing of Free-Software-friendly vendors.
Is there a URL for a site aggregating sales links to the non-suck vendors? I'm going to be in the market soon.

This may already be available. . .

Posted Jul 30, 2010 17:25 UTC (Fri) by moxfyre (guest, #13847) [Link]

I would like to see such a site as well.

Which Android phones have the best vendor funding and documentation for free software driver development, especially graphics and wireless chipsets?

Having such information available would certainly influence my decision about which handset to buy.


Copyright © 2010, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds