From: Juhana S. <ko...@ni...> - 2005-02-25 16:55:16
|
> I want to use mostly low level > operations like setpixel (x, y) in C / C++ - is MESA suitable for this? But it can do it, and I don't know a better option than to use OpenGL. If you're still on the list, please tell us what it is you want to do. Juhana -- http://music.columbia.edu/mailman/listinfo/linux-graphics-dev for developers of open source graphics software |
From: Mathieu M. <mat...@ki...> - 2005-02-22 19:10:24
|
Brian Paul wrote: > Brian Paul wrote: > >> Mathieu Malaterre wrote: >> >>> Hello, >>> >>> I am using Mesa 6.2.1 and I am getting a SIGFPE: >>> >>> [Thread debugging using libthread_db enabled] >>> [New Thread 1100658336 (LWP 13415)] >>> >>> Program received signal SIGFPE, Arithmetic exception. >>> [Switching to Thread 1100658336 (LWP 13415)] >>> _mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193 >>> 193 DIVPS ( XMM0, XMM1 ) >>> Current language: auto; currently asm >>> (gdb) up >>> #1 0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192 >>> 192 _mesa_test_os_sse_exception_support(); >>> >>> >>> After some googling it seems this has already been reported: >>> >>> http://www.mail-archive.com/dri...@li.../msg08493.html >>> >> >> >> >> Hmmm, I must have missed that posting to the dri-devel list. I'll try >> out the patch and check it in if it seems OK. >> > > I misread the date on that posting. That was two years ago. The code > in common_x86_asm.S has changed since then. So I don't think the patch > is relevant anymore. > > However, the problem you report isn't really an issue. When gdb stops > upon the exception, just type 'continue'. > > There's a big comment about this in the current file. Ooops I am sorry. My first reflex is always do google before actually reading the file... Sorry for the noise Mathieu |
From: Brian P. <bri...@tu...> - 2005-02-22 17:41:42
|
Brian Paul wrote: > Mathieu Malaterre wrote: > >> Hello, >> >> I am using Mesa 6.2.1 and I am getting a SIGFPE: >> >> [Thread debugging using libthread_db enabled] >> [New Thread 1100658336 (LWP 13415)] >> >> Program received signal SIGFPE, Arithmetic exception. >> [Switching to Thread 1100658336 (LWP 13415)] >> _mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193 >> 193 DIVPS ( XMM0, XMM1 ) >> Current language: auto; currently asm >> (gdb) up >> #1 0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192 >> 192 _mesa_test_os_sse_exception_support(); >> >> >> After some googling it seems this has already been reported: >> >> http://www.mail-archive.com/dri...@li.../msg08493.html > > > Hmmm, I must have missed that posting to the dri-devel list. I'll try > out the patch and check it in if it seems OK. > I misread the date on that posting. That was two years ago. The code in common_x86_asm.S has changed since then. So I don't think the patch is relevant anymore. However, the problem you report isn't really an issue. When gdb stops upon the exception, just type 'continue'. There's a big comment about this in the current file. -Brian |
From: Brian P. <bri...@tu...> - 2005-02-22 17:38:49
|
Mathieu Malaterre wrote: > Hello, > > I am using Mesa 6.2.1 and I am getting a SIGFPE: > > [Thread debugging using libthread_db enabled] > [New Thread 1100658336 (LWP 13415)] > > Program received signal SIGFPE, Arithmetic exception. > [Switching to Thread 1100658336 (LWP 13415)] > _mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193 > 193 DIVPS ( XMM0, XMM1 ) > Current language: auto; currently asm > (gdb) up > #1 0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192 > 192 _mesa_test_os_sse_exception_support(); > > > After some googling it seems this has already been reported: > > http://www.mail-archive.com/dri...@li.../msg08493.html Hmmm, I must have missed that posting to the dri-devel list. I'll try out the patch and check it in if it seems OK. -Brian |
From: Mathieu M. <mat...@ki...> - 2005-02-22 16:53:43
|
Hello, I am using Mesa 6.2.1 and I am getting a SIGFPE: [Thread debugging using libthread_db enabled] [New Thread 1100658336 (LWP 13415)] Program received signal SIGFPE, Arithmetic exception. [Switching to Thread 1100658336 (LWP 13415)] _mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193 193 DIVPS ( XMM0, XMM1 ) Current language: auto; currently asm (gdb) up #1 0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192 192 _mesa_test_os_sse_exception_support(); After some googling it seems this has already been reported: http://www.mail-archive.com/dri...@li.../msg08493.html

Index: common_x86_asm.S
===================================================================
RCS file: /cvsroot/dri/xc/xc/extras/Mesa/src/X86/common_x86_asm.S,v
retrieving revision 1.16
diff -u -r1.16 common_x86_asm.S
--- common_x86_asm.S    25 Nov 2002 19:57:08 -0000    1.16
+++ common_x86_asm.S    8 Jan 2003 21:27:57 -0000
@@ -225,13 +225,13 @@
     MOVUPS ( REGIND( ESP ), XMM1 )

-    ADD_L ( CONST( 32 ), ESP )
-
     DIVPS ( XMM0, XMM1 )

     /* Restore the original MXCSR register value. */
     LDMXCSR ( REGOFF( -4, EBP ) )
+
+    ADD_L ( CONST( 32 ), ESP )

     LEAVE
     RET

But it seems my copy of Mesa does not use this patch. Comments? Thanks Mathieu |
From: Brian P. <bri...@tu...> - 2005-02-19 20:10:11
|
Hayawardh Vijayakumar wrote: > Please give detailed instructions on how to get MESA up and running on > Linux, for programming with C / C++. Did you read any of the documentation on the website? There's instructions for downloading, compiling and installing Mesa. The demo programs show how to use the library. > I want to use mostly low level > operations like setpixel (x, y) in C / C++ - is MESA suitable for this? Not really. -Brian |
From: Hayawardh V. <hay...@ma...> - 2005-02-19 13:40:13
|
Please give detailed instructions on how to get MESA up and running on Linux, for programming with C / C++. I want to use mostly low level operations like setpixel (x, y) in C / C++ - is MESA suitable for this? Thanks in advance. Hayawardh |
From: Nathan S. <ns...@nc...> - 2005-02-16 03:21:22
|
> As Brian said, Mesa for Windows is software rendering only. It is unlikely > that NI or anyone else has added hardware acceleration by talking directly > to the hardware on the card. Perhaps Mesa could use the ICD interface - not > sure. Well, if NI's Mesa can't do hardware acceleration, that would explain why it feels "slow". |
From: Nathan S. <ns...@nc...> - 2005-02-16 03:21:07
|
> Have you contacted the vendor of LabVIEW? I've started a thread about it over at the LabVIEW forums, but I haven't gotten any replies yet. |
From: Karl S. <k.w...@co...> - 2005-02-16 01:12:17
|
At 05:52 PM 2/15/2005, Nathan Smyth wrote: > > Just use the native OpenGL library/driver. > >Whose OpenGL library/driver? Do MSFT operating systems ship with OpenGL >[i.e. non-DirectX] stuff pre-installed? I thought that was the whole >problem... Windows comes with an OpenGL library. Typically, when one installs a video card, they also install OpenGL 'ICD' drivers that are called by the Microsoft OpenGL library. If no ICD exists, the MS OpenGL will perform software rendering and/or use some pretty basic acceleration provided at the GDI level - not sure. In any case, you really need the vendor's drivers to fully leverage the card. >Anyway, for me at least, I need to learn as much about this "mesa.dll" that >National Instruments ships with LabVIEW [whether or not it can be >accelerated in hardware, and, if so, which cards/chipsets offer the best >drivers]. As Brian said, Mesa for Windows is software rendering only. It is unlikely that NI or anyone else has added hardware acceleration by talking directly to the hardware on the card. Perhaps Mesa could use the ICD interface - not sure. That being said, see http://www.scitechsoft.com/products/ent/gld_home.php SciTech did contribute the code to layer Mesa on top of Direct3D and it is in Mesa's CVS repository. But no one has really worked on it and it is not complete. It is possible that NI is using the SciTech product, but you'd have to ask them. Karl |
From: Brian P. <bri...@tu...> - 2005-02-16 01:00:34
|
Nathan Smyth wrote: >>Just use the native OpenGL library/driver. > > > Whose OpenGL library/driver? Do MSFT operating systems ship with OpenGL > [i.e. non-DirectX] stuff pre-installed? I thought that was the whole > problem... Well, if one buys an ATI or NVIDIA or 3Dlabs card (for example), it typically comes with a CD-ROM with drivers and misc programs. The user will install the software from the CD-ROM, or download a newer driver from the vendor's website. They'll have hardware-accelerated OpenGL at that point. I don't know what sort of OpenGL drivers, if any, are included with a stock Windows XP installation. > Anyway, for me at least, I need to learn as much about this "mesa.dll" that > National Instruments ships with LabVIEW [whether or not it can be > accelerated in hardware, and, if so, which cards/chipsets offer the best > drivers]. Maybe another reader can comment on that. Have you contacted the vendor of LabVIEW? -Brian |
From: Nathan S. <ns...@nc...> - 2005-02-16 00:52:12
|
> Just use the native OpenGL library/driver. Whose OpenGL library/driver? Do MSFT operating systems ship with OpenGL [i.e. non-DirectX] stuff pre-installed? I thought that was the whole problem... Anyway, for me at least, I need to learn as much about this "mesa.dll" that National Instruments ships with LabVIEW [whether or not it can be accelerated in hardware, and, if so, which cards/chipsets offer the best drivers]. |
From: Brian P. <bri...@tu...> - 2005-02-16 00:46:34
|
Nathan Smyth wrote: > Does anyone have any tips about optimizing hardware acceleration of > National Instruments' LabVIEW Mesa Implementation on Win32 platforms? I > ask because LabVIEW feels a little slow to me [especially on older > hardware], and when I upgrade our graphics cards, I'd like to get > something that is known to work well with NI's Mesa. > > Apparently NI puts all their Mesa stuff in one giant file, called > "mesa.dll" [almost 1 MB in size], which in e.g. LabVIEW 7.0 exists as > [variously] > > C:\Program Files\National Instruments\LabVIEW 7.0\resource\mesa.dll > C:\Program Files\National Instruments\Shared\Mesa\mesa.dll > C:\Program Files\National Instruments\Shared\LabVIEW > Run-Time\7.0\mesa.dll > > That's about all I know of NI's Mesa, other than one mention in a > LabWindows forum [LabWindows is a slightly different product than > LabVIEW itself] which indicates that if you update your LabWindows/CVI > Runtime, you /might/ get a newer version of "mesa.dll": > > http://forums.ni.com/ni/board/message?board.id=180&message.id=13948 > > Anyway, are there any good chipsets and/or graphics cards that work well > with NI's Mesa? Or are there any bad chipsets and/or graphics cards that > I should avoid? And are there any configuration files or Registry > settings that I could tweak to get better performance? > > Finally, we've got some older machines that don't have AGP slots: Does > anyone know of a good PCI card that would work well with NI's > Mesa? Likewise, are there any PCI cards that I should avoid? For > instance, I know that e.g. the DRI project doesn't support Matrox's PCI > cards [but I don't know whether that would have any bearing on NI's Mesa]: > > http://dri.freedesktop.org/wiki/Matrox?action=highlight&value=CategoryHardware > > Thanks for any suggestions you might have! If you're interested in hardware rendering, you don't need Mesa at all. Just use the native OpenGL library/driver. Mesa, on Windows anyway, only does software rendering. Historically, various sci-vis apps on Unix/Linux have used the hardware-accelerated OpenGL library when present, and used Mesa's software rendering as a fallback when the former was absent. That said, I don't know anything about LabVIEW or how it uses Mesa. -Brian |
From: Nathan S. <ns...@nc...> - 2005-02-15 23:39:03
|
Does anyone have any tips about optimizing hardware acceleration of National Instruments' LabVIEW Mesa Implementation on Win32 platforms? I ask because LabVIEW feels a little slow to me [especially on older hardware], and when I upgrade our graphics cards, I'd like to get something that is known to work well with NI's Mesa. Apparently NI puts all their Mesa stuff in one giant file, called "mesa.dll" [almost 1 MB in size], which in e.g. LabVIEW 7.0 exists as [variously] C:\Program Files\National Instruments\LabVIEW 7.0\resource\mesa.dll C:\Program Files\National Instruments\Shared\Mesa\mesa.dll C:\Program Files\National Instruments\Shared\LabVIEW Run-Time\7.0\mesa.dll That's about all I know of NI's Mesa, other than one mention in a LabWindows forum [LabWindows is a slightly different product than LabVIEW itself] which indicates that if you update your LabWindows/CVI Runtime, you might get a newer version of "mesa.dll": http://forums.ni.com/ni/board/message?board.id=180&message.id=13948 Anyway, are there any good chipsets and/or graphics cards that work well with NI's Mesa? Or are there any bad chipsets and/or graphics cards that I should avoid? And are there any configuration files or Registry settings that I could tweak to get better performance? Finally, we've got some older machines that don't have AGP slots: Does anyone know of a good PCI card that would work well with NI's Mesa? Likewise, are there any PCI cards that I should avoid? For instance, I know that e.g. the DRI project doesn't support Matrox's PCI cards [but I don't know whether that would have any bearing on NI's Mesa]: http://dri.freedesktop.org/wiki/Matrox?action=highlight&value=CategoryHardware Thanks for any suggestions you might have! |
From: Brian P. <bri...@tu...> - 2005-02-10 15:23:23
|
A few different development/maintence tasks have been mentioned on the dev list recently so I updated the "help wanted" page on the website. Maybe we'll get some new contributors. On http://www.mesa3d.org/ click on "Help Wanted" under the Developers Topics list. Thanks. -Brian |
From: <p...@di...> - 2005-02-03 07:01:13
|
On Wed 02 Feb 05, 4:58 PM, Brian Paul said: > Peter Jay Salzman wrote: > > >But my wife has a Voodoo 3 in her machine, which also uses glide3 and the > >tdfx module. Her system claims that she has a wide range of point sizes > >and line widths! > > > According to the tdfx driver code (line 280 of tdfx_context.c): > > /* No wide points. > */ > ctx->Const.MinPointSize = 1.0; > ctx->Const.MinPointSizeAA = 1.0; > ctx->Const.MaxPointSize = 1.0; > ctx->Const.MaxPointSizeAA = 1.0; > > /* Disable wide lines as we can't antialias them correctly in > * hardware. > */ > ctx->Const.MinLineWidth = 1.0; > ctx->Const.MinLineWidthAA = 1.0; > ctx->Const.MaxLineWidth = 1.0; > ctx->Const.MaxLineWidthAA = 1.0; > ctx->Const.LineWidthGranularity = 1.0; > > I don't know how anything but 1.0 would be reported with the Voodoo3. I just figured out how. I was ssh'ing into her system to run the test program. There was a message that said something to the effect that I wasn't allowed to use DRI, so that implies it fell back on software rendering. I just went over to her monitor and tried it. Sure enough - line width and point sizes of only 1. > >OK. Guess it's time for a new video card, then. But in the meantime, is > >there a way to force software rendering? > > Do this before you run your app: > setenv LIBGL_ALWAYS_INDIRECT 1 Yeah -- that worked, suitably translated into bash ;-). I guess hardware rendering is screwed up. Which is weird. Whatever functionality is broken must not be getting used by real games like quake[1-3] and .*doom. This is good enough now. It'll allow me to continue along in the redbook. It was a bummer writing my own little learning programs and not being able to see the correct (or, often, incorrect) output. :-) Thanks for your help! Getting back to vertex arrays... Pete |
From: Brian P. <bri...@tu...> - 2005-02-02 23:56:44
|
Peter Jay Salzman wrote: > On Wed 02 Feb 05, 12:18 PM, Brian Paul <bri...@tu...> said: > >>Peter Jay Salzman wrote: >> >>>I've been going through the Red Book to teach myself OpenGL, but I think >>>something is very wrong with OpenGL on this system. >>> >>> >>>System specifics: >>>----------------- >>>My system is a dual Celeron 333 with Debian/testing and a Voodoo 5 card. > > > snip > > >>>My wife's system is an Athlon 1.4 with Debian/testing and a Voodoo 3 card. >>>Everything appears to work on her system. >>> >>> >>> >>>The errors: >>>----------- >>>I didn't think anything was wrong before writing my own OpenGL stuff; >>>sdlquake2 on my system runs great using the SDL OpenGL rendering option. >>>But there appears to be two things wrong. Maybe they're related. >>> >>>First, this code: >>> >>> glGetFloatv( GL_ALIASED_LINE_WIDTH_RANGE, lineInfo ); >>> glGetFloatv( GL_ALIASED_POINT_SIZE_RANGE, pointInfo ); >>> >>> printf("line min: %f, max: %f\n", lineInfo[0], lineInfo[1]); >>> printf("point min: %f, max: %f\n", pointInfo[0], pointInfo[1]); >>> >>>on my system produces: >>> >>> cpu vendor: GenuineIntel >>> MMX cpu detected. >>> libGL: using Glide library libglide3.so.3 >>> line min: 1.000000, max: 1.000000 >>> point min: 1.000000, max: 1.000000 >>> >>>Only 1 width for lines and 1 size for points: that can't be right. >> >>It's allowed by the OpenGL spec. Glide probably doesn't allow wider >>lines/points. > > > But my wife has a Voodoo 3 in her machine, which also uses glide3 and the > tdfx module. Her system claims that she has a wide range of point sizes and > line widths! According to the tdfx driver code (line 280 of tdfx_context.c): /* No wide points. */ ctx->Const.MinPointSize = 1.0; ctx->Const.MinPointSizeAA = 1.0; ctx->Const.MaxPointSize = 1.0; ctx->Const.MaxPointSizeAA = 1.0; /* Disable wide lines as we can't antialias them correctly in * hardware. 
*/ ctx->Const.MinLineWidth = 1.0; ctx->Const.MinLineWidthAA = 1.0; ctx->Const.MaxLineWidth = 1.0; ctx->Const.MaxLineWidthAA = 1.0; ctx->Const.LineWidthGranularity = 1.0; I don't know how anything but 1.0 would be reported with the Voodoo3. >>>On my >>>wife's system, the same executable (I scp'ed it over) produced what I was >>>expecting it to produce: >>> >>> line min: 1.000000, max: 10.000000 >>> point min: 1.000000, max: 10.000000 >> >>Are you sure you're using the hardware driver on this system, and not >>software rendering? glxinfo should tell you. > > > I don't see where it says hw or sw rendering, but it does say "direct > rendering" which sounds like "hardware rendering"? Direct rendering would imply hardware rendering. The "OpenGL renderer" line of glxinfo gives precise info about the hardware driver. >>>The second error: I drew two square polygons: >>> >>> >>> void RenderScene(void) >>> { >>> glClear (GL_COLOR_BUFFER_BIT); >>> glColor3f( 1.0, 1.0, 1.0 ); >>> >>> glBegin( GL_POLYGON ); >>> // CCW on the left will be filled >>> glVertex2f( 50.0f, 100.0f ); >>> glVertex2f( 300.0f, 100.0f ); >>> glVertex2f( 300.0f, 300.0f ); >>> glVertex2f( 50.0f, 300.0f ); >>> glEnd(); >>> >>> glBegin( GL_POLYGON ); >>> // CW on the right will be wireframe >>> glVertex2f( 450.0f, 100.0f ); >>> glVertex2f( 450.0f, 300.0f ); >>> glVertex2f( 700.0f, 300.0f ); >>> glVertex2f( 700.0f, 100.0f ); >>> glEnd(); >>> >>> glFlush(); >>> } >>> >>> >>> int main(int argc, char *argv[]) >>> { >>> // Glut Init stuff snipped >>> >>> glClearColor(0.0, 0.0, 0.0, 0.0); >>> glShadeModel (GL_FLAT); >>> glPolygonMode( GL_FRONT, GL_FILL ); >>> glPolygonMode( GL_BACK, GL_LINE ); >>> >>> // Glut MainLoop and return 0 snipped >>> } >>> >>> >>>On my system, the filled square on the left is as expected. However, the >>>wireframe square on the right only has a single side drawn. The top, >>>right, >>>and bottom lines appear to be be misssing. 
So this is what I see: >>> >>> xxxxxxxx x >>> xxxxxxxx x >>> xxxxxxxx x >>> xxxxxxxx x >>> xxxxxxxx x >>> >>>On my wife's system, it renders the way I was expecting it to: >>> >>> xxxxxxxx xxxxxxxx >>> xxxxxxxx xxxxxxxx >>> xxxxxxxx xxxxxxxx >>> xxxxxxxx xxxxxxxx >>> xxxxxxxx xxxxxxxx >>> >>>Setting MESA_DEBUG=1 didn't reveal any information. I really, really want >>>to get whatever is wrong fixed; it's ironic that my wife's system seems to >>>be OK and my system is having these OpenGL problems. >>> >>>Both machines run Debian testing and have the same packages. The only >>>difference is the video cards and monitors. My system has a Voodoo 5 and a >>>Philips flat screen monitor. Her system has a Voodoo 3 and a run of the >>>mill monitor. >>> >>>Does anyone have any idea what could be going on with my system, and more >>>importantly, how to fix it? >> >>I don't think anyone's worked on the tdfx DRI driver in years. There >>could certainly be some substantial bugs in it. >> >>If you're just doing basic things and learning OpenGL, software >>rendering with Mesa should suffice. > > > OK. Guess it's time for a new video card, then. But in the meantime, is > there a way to force software rendering? Do this before you run your app: setenv LIBGL_ALWAYS_INDIRECT 1 > Would I comment out the file in XF86Config that loads the dri module? You don't have to do that. -Brian |
From: <p...@di...> - 2005-02-02 22:01:02
|
On Wed 02 Feb 05, 12:18 PM, Brian Paul <bri...@tu...> said: > Peter Jay Salzman wrote: > >I've been going through the Red Book to teach myself OpenGL, but I think > >something is very wrong with OpenGL on this system. > > > > > >System specifics: > >----------------- > >My system is a dual Celeron 333 with Debian/testing and a Voodoo 5 card. snip > >My wife's system is an Athlon 1.4 with Debian/testing and a Voodoo 3 card. > >Everything appears to work on her system. > > > > > > > >The errors: > >----------- > >I didn't think anything was wrong before writing my own OpenGL stuff; > >sdlquake2 on my system runs great using the SDL OpenGL rendering option. > >But there appears to be two things wrong. Maybe they're related. > > > >First, this code: > > > > glGetFloatv( GL_ALIASED_LINE_WIDTH_RANGE, lineInfo ); > > glGetFloatv( GL_ALIASED_POINT_SIZE_RANGE, pointInfo ); > > > > printf("line min: %f, max: %f\n", lineInfo[0], lineInfo[1]); > > printf("point min: %f, max: %f\n", pointInfo[0], pointInfo[1]); > > > >on my system produces: > > > > cpu vendor: GenuineIntel > > MMX cpu detected. > > libGL: using Glide library libglide3.so.3 > > line min: 1.000000, max: 1.000000 > > point min: 1.000000, max: 1.000000 > > > >Only 1 width for lines and 1 size for points: that can't be right. > > It's allowed by the OpenGL spec. Glide probably doesn't allow wider > lines/points. But my wife has a Voodoo 3 in her machine, which also uses glide3 and the tdfx module. Her system claims that she has a wide range of point sizes and line widths! > >On my > >wife's system, the same executable (I scp'ed it over) produced what I was > >expecting it to produce: > > > > line min: 1.000000, max: 10.000000 > > point min: 1.000000, max: 10.000000 > > Are you sure you're using the hardware driver on this system, and not > software rendering? glxinfo should tell you. I don't see where it says hw or sw rendering, but it does say "direct rendering" which sounds like "hardware rendering"? 
> >The second error: I drew two square polygons: > > > > > > void RenderScene(void) > > { > > glClear (GL_COLOR_BUFFER_BIT); > > glColor3f( 1.0, 1.0, 1.0 ); > > > > glBegin( GL_POLYGON ); > > // CCW on the left will be filled > > glVertex2f( 50.0f, 100.0f ); > > glVertex2f( 300.0f, 100.0f ); > > glVertex2f( 300.0f, 300.0f ); > > glVertex2f( 50.0f, 300.0f ); > > glEnd(); > > > > glBegin( GL_POLYGON ); > > // CW on the right will be wireframe > > glVertex2f( 450.0f, 100.0f ); > > glVertex2f( 450.0f, 300.0f ); > > glVertex2f( 700.0f, 300.0f ); > > glVertex2f( 700.0f, 100.0f ); > > glEnd(); > > > > glFlush(); > > } > > > > > > int main(int argc, char *argv[]) > > { > > // Glut Init stuff snipped > > > > glClearColor(0.0, 0.0, 0.0, 0.0); > > glShadeModel (GL_FLAT); > > glPolygonMode( GL_FRONT, GL_FILL ); > > glPolygonMode( GL_BACK, GL_LINE ); > > > > // Glut MainLoop and return 0 snipped > > } > > > > > >On my system, the filled square on the left is as expected. However, the > >wireframe square on the right only has a single side drawn. The top, > >right, > >and bottom lines appear to be be misssing. So this is what I see: > > > > xxxxxxxx x > > xxxxxxxx x > > xxxxxxxx x > > xxxxxxxx x > > xxxxxxxx x > > > >On my wife's system, it renders the way I was expecting it to: > > > > xxxxxxxx xxxxxxxx > > xxxxxxxx xxxxxxxx > > xxxxxxxx xxxxxxxx > > xxxxxxxx xxxxxxxx > > xxxxxxxx xxxxxxxx > > > >Setting MESA_DEBUG=1 didn't reveal any information. I really, really want > >to get whatever is wrong fixed; it's ironic that my wife's system seems to > >be OK and my system is having these OpenGL problems. > > > >Both machines run Debian testing and have the same packages. The only > >difference is the video cards and monitors. My system has a Voodoo 5 and a > >Philips flat screen monitor. Her system has a Voodoo 3 and a run of the > >mill monitor. > > > >Does anyone have any idea what could be going on with my system, and more > >importantly, how to fix it? 
> > I don't think anyone's worked on the tdfx DRI driver in years. There > could certainly be some substantial bugs in it. > > If you're just doing basic things and learning OpenGL, software > rendering with Mesa should suffice. OK. Guess it's time for a new video card, then. But in the meantime, is there a way to force software rendering? Would I comment out the file in XF86Config that loads the dri module? Thanks! Pete -- The mathematics of physics has become ever more abstract, rather than more complicated. The mind of God appears to be abstract but not complicated. He also appears to like group theory. -- Tony Zee's "Fearful Symmetry" GPG Fingerprint: B9F1 6CF3 47C4 7CD8 D33E 70A9 A3B9 1945 67EA 951D |
From: Brian P. <bri...@tu...> - 2005-02-02 19:18:19
|
Jiayuan Zhu wrote: > HI, > I am a newbie to Mesa. I downloaded the mesa lib for MS Windows > system to run my opengl applications and find it great. My application > only uses GL core lib, but not GLU or GLUT. So I've got several > questions about the Mesa lib > > 1) I put the compiled OpenGL32.dll under the same directory as my > executable(a very simple one) and it works(without utilizing > OSMesa32.dll). Therefore, I was wondering what's the purpose of > placing OSMesa32.dll over there. To my understand, Mesa is a > software-based rendering library and the OpenGL32.dll does the whole > thing..... The OSMesa32.dll file should implement the "OSMesa" functions used for off-screen rendering. If you don't know what that is, you probably don't need to be concerned with it. > 2)I am wondering if you can tell me how to add some simple OpenGL APIs > into the current Mesa system. For example, one of my applications > calls an ARB extension "wglGetExtensionStringARB" which is not > available in Mesa, is there any simple way to add it in? Sure, add them to Mesa/src/mesa/drivers/windows/gdi/wgl.c -Brian |
From: Brian P. <bri...@tu...> - 2005-02-02 19:16:23
|
Peter Jay Salzman wrote:
> I've been going through the Red Book to teach myself OpenGL, but I think
> something is very wrong with OpenGL on this system.
>
> System specifics:
> -----------------
> My system is a dual Celeron 333 with Debian/testing and a Voodoo 5 card.
> Installed Mesa packages:
>
> mesademos       6.2.1-1         Example programs for Mesa (and OpenGL in gen
> xlibmesa-dev    4.3.0.dfsg.1-1  XFree86 Mesa development libraries dummy pac
> xlibmesa-dri    4.3.0.dfsg.1-1  Mesa 3D graphics library modules [XFree86]
> xlibmesa-gl     4.3.0.dfsg.1-1  Mesa 3D graphics library [XFree86]
> xlibmesa-gl-de  4.3.0.dfsg.1-1  Mesa 3D graphics library development files [
> xlibmesa-glu    4.3.0.dfsg.1-1  Mesa OpenGL utility library [XFree86]
> xlibmesa-glu-d  4.3.0.dfsg.1-1  Mesa OpenGL utility library development file
> xlibosmesa-dev  4.3.0.dfsg.1-1  Mesa off-screen rendering library developmen
> xlibosmesa3     4.2.1-12.1      Mesa off-screen rendering library [XFree86]
> xlibosmesa4     4.3.0.dfsg.1-1  Mesa off-screen rendering library [XFree86]
>
> My wife's system is an Athlon 1.4 with Debian/testing and a Voodoo 3 card.
> Everything appears to work on her system.
>
> The errors:
> -----------
> I didn't think anything was wrong before writing my own OpenGL stuff;
> sdlquake2 on my system runs great using the SDL OpenGL rendering option.
> But there appear to be two things wrong. Maybe they're related.
>
> First, this code:
>
>     glGetFloatv( GL_ALIASED_LINE_WIDTH_RANGE, lineInfo );
>     glGetFloatv( GL_ALIASED_POINT_SIZE_RANGE, pointInfo );
>
>     printf("line min: %f, max: %f\n", lineInfo[0], lineInfo[1]);
>     printf("point min: %f, max: %f\n", pointInfo[0], pointInfo[1]);
>
> on my system produces:
>
>     cpu vendor: GenuineIntel
>     MMX cpu detected.
>     libGL: using Glide library libglide3.so.3
>     line min: 1.000000, max: 1.000000
>     point min: 1.000000, max: 1.000000
>
> Only 1 width for lines and 1 size for points: that can't be right.

It's allowed by the OpenGL spec. Glide probably doesn't allow wider
lines/points.

> On my wife's system, the same executable (I scp'ed it over) produced what
> I was expecting it to produce:
>
>     line min: 1.000000, max: 10.000000
>     point min: 1.000000, max: 10.000000

Are you sure you're using the hardware driver on this system, and not
software rendering? glxinfo should tell you.

> The second error: I drew two square polygons:
>
> void RenderScene(void)
> {
>     glClear( GL_COLOR_BUFFER_BIT );
>     glColor3f( 1.0, 1.0, 1.0 );
>
>     glBegin( GL_POLYGON );
>     // CCW on the left will be filled
>     glVertex2f( 50.0f, 100.0f );
>     glVertex2f( 300.0f, 100.0f );
>     glVertex2f( 300.0f, 300.0f );
>     glVertex2f( 50.0f, 300.0f );
>     glEnd();
>
>     glBegin( GL_POLYGON );
>     // CW on the right will be wireframe
>     glVertex2f( 450.0f, 100.0f );
>     glVertex2f( 450.0f, 300.0f );
>     glVertex2f( 700.0f, 300.0f );
>     glVertex2f( 700.0f, 100.0f );
>     glEnd();
>
>     glFlush();
> }
>
> int main(int argc, char *argv[])
> {
>     // Glut Init stuff snipped
>
>     glClearColor( 0.0, 0.0, 0.0, 0.0 );
>     glShadeModel( GL_FLAT );
>     glPolygonMode( GL_FRONT, GL_FILL );
>     glPolygonMode( GL_BACK, GL_LINE );
>
>     // Glut MainLoop and return 0 snipped
> }
>
> On my system, the filled square on the left is as expected. However, the
> wireframe square on the right only has a single side drawn. The top,
> right, and bottom lines appear to be missing. So this is what I see:
>
>     xxxxxxxx      x
>     xxxxxxxx      x
>     xxxxxxxx      x
>     xxxxxxxx      x
>     xxxxxxxx      x
>
> On my wife's system, it renders the way I was expecting it to:
>
>     xxxxxxxx      xxxxxxxx
>     xxxxxxxx      xxxxxxxx
>     xxxxxxxx      xxxxxxxx
>     xxxxxxxx      xxxxxxxx
>     xxxxxxxx      xxxxxxxx
>
> Setting MESA_DEBUG=1 didn't reveal any information. I really, really want
> to get whatever is wrong fixed; it's ironic that my wife's system seems to
> be OK and my system is having these OpenGL problems.
>
> Both machines run Debian testing and have the same packages. The only
> difference is the video cards and monitors. My system has a Voodoo 5 and a
> Philips flat screen monitor. Her system has a Voodoo 3 and a run of the
> mill monitor.
>
> Does anyone have any idea what could be going on with my system, and more
> importantly, how to fix it?

I don't think anyone's worked on the tdfx DRI driver in years. There could
certainly be some substantial bugs in it.

If you're just doing basic things and learning OpenGL, software rendering
with Mesa should suffice.

-Brian
|
From: <p...@di...> - 2005-02-02 15:11:50
|
I've been going through the Red Book to teach myself OpenGL, but I think
something is very wrong with OpenGL on this system.

System specifics:
-----------------
My system is a dual Celeron 333 with Debian/testing and a Voodoo 5 card.
Installed Mesa packages:

mesademos       6.2.1-1         Example programs for Mesa (and OpenGL in gen
xlibmesa-dev    4.3.0.dfsg.1-1  XFree86 Mesa development libraries dummy pac
xlibmesa-dri    4.3.0.dfsg.1-1  Mesa 3D graphics library modules [XFree86]
xlibmesa-gl     4.3.0.dfsg.1-1  Mesa 3D graphics library [XFree86]
xlibmesa-gl-de  4.3.0.dfsg.1-1  Mesa 3D graphics library development files [
xlibmesa-glu    4.3.0.dfsg.1-1  Mesa OpenGL utility library [XFree86]
xlibmesa-glu-d  4.3.0.dfsg.1-1  Mesa OpenGL utility library development file
xlibosmesa-dev  4.3.0.dfsg.1-1  Mesa off-screen rendering library developmen
xlibosmesa3     4.2.1-12.1      Mesa off-screen rendering library [XFree86]
xlibosmesa4     4.3.0.dfsg.1-1  Mesa off-screen rendering library [XFree86]

My wife's system is an Athlon 1.4 with Debian/testing and a Voodoo 3 card.
Everything appears to work on her system.

The errors:
-----------
I didn't think anything was wrong before writing my own OpenGL stuff;
sdlquake2 on my system runs great using the SDL OpenGL rendering option.
But there appear to be two things wrong. Maybe they're related.

First, this code:

    glGetFloatv( GL_ALIASED_LINE_WIDTH_RANGE, lineInfo );
    glGetFloatv( GL_ALIASED_POINT_SIZE_RANGE, pointInfo );

    printf("line min: %f, max: %f\n", lineInfo[0], lineInfo[1]);
    printf("point min: %f, max: %f\n", pointInfo[0], pointInfo[1]);

on my system produces:

    cpu vendor: GenuineIntel
    MMX cpu detected.
    libGL: using Glide library libglide3.so.3
    line min: 1.000000, max: 1.000000
    point min: 1.000000, max: 1.000000

Only 1 width for lines and 1 size for points: that can't be right.

On my wife's system, the same executable (I scp'ed it over) produced what I
was expecting it to produce:

    line min: 1.000000, max: 10.000000
    point min: 1.000000, max: 10.000000

The second error: I drew two square polygons:

void RenderScene(void)
{
    glClear( GL_COLOR_BUFFER_BIT );
    glColor3f( 1.0, 1.0, 1.0 );

    glBegin( GL_POLYGON );
    // CCW on the left will be filled
    glVertex2f( 50.0f, 100.0f );
    glVertex2f( 300.0f, 100.0f );
    glVertex2f( 300.0f, 300.0f );
    glVertex2f( 50.0f, 300.0f );
    glEnd();

    glBegin( GL_POLYGON );
    // CW on the right will be wireframe
    glVertex2f( 450.0f, 100.0f );
    glVertex2f( 450.0f, 300.0f );
    glVertex2f( 700.0f, 300.0f );
    glVertex2f( 700.0f, 100.0f );
    glEnd();

    glFlush();
}

int main(int argc, char *argv[])
{
    // Glut Init stuff snipped

    glClearColor( 0.0, 0.0, 0.0, 0.0 );
    glShadeModel( GL_FLAT );
    glPolygonMode( GL_FRONT, GL_FILL );
    glPolygonMode( GL_BACK, GL_LINE );

    // Glut MainLoop and return 0 snipped
}

On my system, the filled square on the left is as expected. However, the
wireframe square on the right only has a single side drawn. The top, right,
and bottom lines appear to be missing. So this is what I see:

    xxxxxxxx      x
    xxxxxxxx      x
    xxxxxxxx      x
    xxxxxxxx      x
    xxxxxxxx      x

On my wife's system, it renders the way I was expecting it to:

    xxxxxxxx      xxxxxxxx
    xxxxxxxx      xxxxxxxx
    xxxxxxxx      xxxxxxxx
    xxxxxxxx      xxxxxxxx
    xxxxxxxx      xxxxxxxx

Setting MESA_DEBUG=1 didn't reveal any information. I really, really want
to get whatever is wrong fixed; it's ironic that my wife's system seems to
be OK and my system is having these OpenGL problems.

Both machines run Debian testing and have the same packages. The only
difference is the video cards and monitors. My system has a Voodoo 5 and a
Philips flat screen monitor. Her system has a Voodoo 3 and a run of the
mill monitor.

Does anyone have any idea what could be going on with my system, and more
importantly, how to fix it?

Thanks!
Peter
|
From: Jiayuan Z. <jia...@gm...> - 2005-02-02 06:57:54
|
Hi,

I am a newbie to Mesa. I downloaded the Mesa lib for the MS Windows system
to run my OpenGL applications and find it great. My application only uses
the GL core lib, not GLU or GLUT. So I've got several questions about the
Mesa lib:

1) I put the compiled OpenGL32.dll under the same directory as my
executable (a very simple one) and it works (without utilizing
OSMesa32.dll). Therefore, I was wondering what's the purpose of placing
OSMesa32.dll over there. To my understanding, Mesa is a software-based
rendering library and the OpenGL32.dll does the whole thing.

2) I am wondering if you can tell me how to add some simple OpenGL APIs
into the current Mesa system. For example, one of my applications calls an
ARB extension "wglGetExtensionStringARB" which is not available in Mesa.
Is there any simple way to add it in?

Hope to hear from you soon and thanks a lot for your time!

Jiayuan Zhu
Ath 221
Dept of Computing Science
University of Alberta
T6G 2E8 Canada
|
From: Brian P. <bri...@tu...> - 2005-02-01 15:21:56
|
Daniel Sperka wrote:
> I am using mesa-solo in an application that attempts to generate simple
> 3d scenes at the full frame rate of the video card. I use an /etc/drirc
> file with the following option for my app:
>
>     <option name="vblank_mode" value="3"/>
>
> This option is supposed to synchronize the swap to the VBLANK period.
> This much works fine -- we've had great success in our early testing.
>
> I want to KNOW FOR CERTAIN that the swap occurred for a given frame, and
> that it didn't skip a frame or two in the process. When that happens, I
> need to know how many frames were missed.

AFAIK, glXSwapBuffers() will never return without having done the
front/back swap.

> In the miniglx interface there doesn't seem to be any access to the
> various methods -- like queryFrameTracking -- which would answer my
> questions. In particular, queryFrameTracking is a member of
> 'DRIdrawable', but that struct is opaque to a glX client application
> like mine (it is defined in miniglxP.h).
>
> What's a good way to implement such a call (there might be better calls
> buried inside the mesa drivers; I'd be happy with one of those) without
> TOO much of a hack?

You should probably ask your question on the mesa3d-dev or dri-devel
mailing lists.

-Brian
|
From: Daniel S. <djs...@uc...> - 2005-02-01 00:11:06
|
I am using mesa-solo in an application that attempts to generate simple 3d
scenes at the full frame rate of the video card. I use an /etc/drirc file
with the following option for my app:

    <option name="vblank_mode" value="3"/>

This option is supposed to synchronize the swap to the VBLANK period. This
much works fine -- we've had great success in our early testing.

I want to KNOW FOR CERTAIN that the swap occurred for a given frame, and
that it didn't skip a frame or two in the process. When that happens, I
need to know how many frames were missed.

In the miniglx interface there doesn't seem to be any access to the various
methods -- like queryFrameTracking -- which would answer my questions. In
particular, queryFrameTracking is a member of 'DRIdrawable', but that
struct is opaque to a glX client application like mine (it is defined in
miniglxP.h).

What's a good way to implement such a call (there might be better calls
buried inside the mesa drivers; I'd be happy with one of those) without TOO
much of a hack?

Dan
|