Sorry if I seem to be picking on you but you just happened to make several incorrect and uninformed statements in this thread.
Don't be sorry... intelligent discussion is what makes a topic interesting...
The Windows implementation of Direct3D has no interface called IWineD3DSurface. I suggest you read about
Direct3D Surfaces. "A surface represents a linear area of display memory
and usually resides in the display memory of the display card, although surfaces can exist in system memory. Surfaces are managed by the IDirect3DSurface9 interface."
You are right, I typed too fast... it is IDirect3DSurface...
Now, a surface is not a texture... A surface can be seen as a memory buffer where pixels are stored. It can be in main memory and/or in video memory, depending on many things: its creation parameters, memory usage, locking strategy, driver, etc. In the case of Sins, I don't know how they handle surfaces...
Any texture contains at least one surface... In the case of Sins, due to mip mapping, a texture has several surfaces, one per mip level... Sins loads the texture into RAM, the engine selects the right surface as a function of the object's distance, and sends that surface via DirectX to the graphics RAM...
The advantage of a texture is that it can be mapped onto primitives during rendering; a surface can't be.
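To make the surface-per-mip-level idea concrete, here is a minimal sketch (not Sins' actual code). `mipLevelCount` shows why a 256x256 texture carries 9 surfaces; `pickMipLevel` uses raw distance as the post describes, though real drivers derive the level from on-screen texel density. `baseDistance` is a hypothetical tuning parameter, not anything from the game.

```cpp
#include <algorithm>
#include <cmath>

// Number of surfaces (mip levels) in a full mip chain for a square texture:
// each level halves the previous one until 1x1 is reached, so 256x256 -> 9.
int mipLevelCount(int size) {
    int levels = 1;
    while (size > 1) { size /= 2; ++levels; }
    return levels;
}

// Toy distance-based pick: every doubling of the distance drops one mip level.
// Real drivers select the level from screen-space texel density, not distance.
int pickMipLevel(float distance, float baseDistance, int levels) {
    int level = (distance <= baseDistance)
        ? 0
        : static_cast<int>(std::log2(distance / baseDistance));
    return std::clamp(level, 0, levels - 1);
}
```

An object four times farther than `baseDistance` would get level 2 (a quarter-resolution surface), and anything beyond the chain's range clamps to the smallest 1x1 level.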
It is possible that Wine is faster than Microsoft's Direct3D for some games, I'm not contesting that possibility, although I am not prone to believing it unless some kind of benchmarking gets done.
Not Wine but OpenGL... and not "is" but "was"... As for Sins, the game uses DirectX 9... and DirectX 9 is not good at mode switching...
A more substantive and modern performance difference arises because of the structure of the hardware drivers provided by hardware developers. Under DirectX, IHV drivers are kernel-mode drivers installed into the operating system. The user-mode portion of the API is handled by the DirectX runtime provided by Microsoft. Under OpenGL however, the IHV driver is broken into two parts: a user-mode portion that implements the OpenGL API, and a kernel-mode driver that is called by the user-mode portion.
The reason this is an issue is because calling kernel-mode operations from user-mode requires performing a system call (i.e. making the CPU switch to kernel mode). This is a slow operation, taking on the order of microseconds to complete. During this time, the CPU is unable to perform any operations. As such, a performance optimization would be to minimize the number of times this switching operation must be performed. For example, if the GPU’s command buffer is full of rendering data, the API could simply store the requested rendering call in a temporary buffer and, when the command buffer is close to being empty, it can perform a switch to kernel-mode and add a number of stored commands all at once. This is known as marshalling.
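The marshalling idea above can be sketched with a toy user-mode command queue. This is a simulation, not driver code: the "kernel switch" is just a counter standing in for the expensive syscall, and the capacity of 32 is an arbitrary illustrative number.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy model of user-mode marshalling: commands queue up in user mode, and
// the expensive kernel transition is paid once per flush, not once per call.
class CommandQueue {
public:
    explicit CommandQueue(std::size_t capacity) : capacity_(capacity) {}

    void submit(const std::string& cmd) {
        pending_.push_back(cmd);
        if (pending_.size() == capacity_)
            flush();  // batch is full: pay for one kernel switch
    }

    // Stands in for the real syscall that hands the batch to the kernel driver.
    void flush() {
        if (pending_.empty()) return;
        ++kernelSwitches_;
        commandsSent_ += pending_.size();
        pending_.clear();
    }

    int kernelSwitches() const { return kernelSwitches_; }
    std::size_t commandsSent() const { return commandsSent_; }

private:
    std::size_t capacity_;
    std::vector<std::string> pending_;
    int kernelSwitches_ = 0;
    std::size_t commandsSent_ = 0;
};
```

Submitting 100 draw commands through a queue of capacity 32 (plus a final flush) costs 4 simulated kernel switches instead of the 100 that a switch-per-call model would pay.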
Because Direct3D IHV drivers are kernel-mode, and the user-mode code is out of the IHV’s hand, there is no chance for such optimizations to occur. Because the Direct3D runtime, the user-mode portion that implements the API, cannot have explicit knowledge of the driver’s inner workings, it cannot effectively support marshalling. This means that every D3D call that sends commands to the hardware must perform a kernel-mode switch. This has led to a number of behaviors with regard to using D3D, the most important being the need for submitting large batches of triangles in one function call.
Since OpenGL’s IHV drivers have a user-mode component to them, IHVs have the ability to implement marshalling, thus improving performance. There is still kernel-mode switching, but the theoretical maximum number of switches under OpenGL implementations is simply equal to the Direct3D standard behavior.
Direct3D 10, the release included with Windows Vista, allows portions of drivers to run in user-mode, thus allowing IHVs to implement marshalling, thus bringing the two back into relative performance parity. The Mac OS X OpenGL system implements a very similar system, where IHVs implement a simpler version of the OpenGL API (with both user and kernel mode components), and Apple’s additions to the runtime provide the direct interface to the user code, as well as some basic work to make IHVs’ jobs easier.
So, DirectX 10 allows equal performance... but people using DirectX 9 will have lower performance...
Second thing:
The OpenGL extension mechanism is probably the most heavily disputed difference between the two APIs. OpenGL includes a mechanism where any driver can advertise its own extensions to the API, thus introducing new functionality such as blend modes, new ways of transferring data to the GPU, or different texture wrapping parameters. This allows new functionality to be exposed quickly, but can lead to confusion if different vendors implement similar extensions with different APIs. Many of these extensions are periodically standardized by the OpenGL Architecture Review Board (ARB), and some are made a core part of future OpenGL revisions.
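In practice an application discovers these extensions by inspecting the driver's advertised list; in real code that list comes from `glGetString(GL_EXTENSIONS)`, which needs a live GL context, so this sketch checks a hard-coded sample string instead. The token-by-token comparison avoids the classic substring pitfall, where searching for "GL_EXT_texture" would falsely match inside "GL_EXT_texture3D".

```cpp
#include <sstream>
#include <string>

// In a real program the extension list would come from
// glGetString(GL_EXTENSIONS); here the caller supplies it as a string.
// Comparing whole space-separated tokens (not substrings) prevents
// "GL_EXT_texture" from matching inside "GL_EXT_texture3D".
bool hasExtension(const std::string& extensionList, const std::string& name) {
    std::istringstream tokens(extensionList);
    std::string ext;
    while (tokens >> ext)
        if (ext == name)
            return true;
    return false;
}
```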
On the other hand, Direct3D is specified by one vendor (Microsoft) only, leading to a more consistent API, but denying access to vendor-specific features. NVIDIA’s UltraShadow technology,[9] for instance, is not available in the stock Direct3D APIs at the time of writing. It should be noted that Direct3D does support texture format extensions (via FourCC). These were once little-known and rarely used, but are now used for DXT texture compression.
When graphics cards added support for pixel shaders (known on OpenGL as “fragment programs”), Direct3D provided a single “Pixel Shader 1.1” (PS1.1) standard which the GeForce 3 and up, and Radeon 8500 and up, claimed compatibility with. Under OpenGL the same functionality was accessed through a variety of custom extensions.
In theory, the Microsoft approach allows a single code path to support both brands of card, whereas under OpenGL the programmer had to write two separate systems. In reality, though, because of the limits on pixel processing of those early cards, Pixel Shader 1.1 was nothing more than a pseudo-assembly language version of the NVIDIA-specific OpenGL extensions. For the most part, the only cards that claimed PS 1.1 functionality were NVIDIA cards, and that is because they were built for it natively. When the Radeon 8500 was released, Microsoft released an update to Direct3D that included Pixel Shader 1.4, which was nothing more than a pseudo-assembly language version of the ATI-specific OpenGL extensions. The only cards that claimed PS 1.4 support were ATI cards because they were designed with the precise hardware necessary to make that functionality happen. In terms of early pixel shaders, Direct3D’s attempt at a single code path fared no better than the OpenGL mechanism.
Fortunately, this situation only existed for a short time under both APIs. Second-generation pixel shading cards were much more similar in functionality, with each architecture evolving towards the same kind of pixel processing conclusion. As such, Pixel Shader 2.0 allowed a unified code path under Direct3D. Around the same time OpenGL introduced its own ARB-approved vertex and pixel shader extensions (GL_ARB_vertex_program and GL_ARB_fragment_program), and both sets of cards supported this standard as well.
Modern cards are better with OpenGL because you don't have ONE OpenGL but multiple variations of it... when you install the latest graphics driver on Windows, you get the OpenGL that is right for your card... with DirectX, you need to wait for Microsoft to make a new release before you can use the new functions...
I wrote "disable Aero and the 3D thing"... in some cases, with the 3D desktop, games are not able to start at all! It is more than a performance hit... Now, try Sins in windowed mode with Aero (huge performance hit)... in full-screen mode, no difference (because Aero is disabled during full-screen mode!)...
Understand me, OpenGL can be better than DirectX, but programming it is more complex, and it is only good on high-end systems... if the graphics card doesn't support a function, OpenGL falls back to software rendering... and software rendering is slower in OpenGL than in DirectX... If you make professional software that runs on very high-end machines, like pro 3D software, OpenGL is a good choice... if you make a game for PC and want it to run on low-end machines, DirectX is the right choice... if you make a game for several consoles and PC, OpenGL is again a possible choice, since DirectX is supported only by the Xbox... but OpenGL is supported by all...
It is not about being faster than something else... it is about the right "overall" speed for a specific use... For example, a Ferrari is faster than a bike... but if the only trip you ever make is to a food shop 50 meters from your house, the Ferrari will be slower than the bike overall... on such a short trip, the time the car gains is minimal, but the time lost opening the door, starting the engine, finding a parking place, etc. is huge...
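The car-vs-bike analogy is really the fixed-overhead vs throughput trade-off behind the kernel-switch argument, and it can be put in numbers. The speeds and overheads below are illustrative guesses, not measurements:

```cpp
// Fixed-overhead model: total time = startup overhead + distance / speed.
// The faster option only wins once the trip is long enough to amortize
// its overhead, exactly like a batched kernel switch vs a per-call one.
double tripSeconds(double overheadSec, double speedMetersPerSec, double meters) {
    return overheadSec + meters / speedMetersPerSec;
}
```

With, say, 120 s of overhead at 15 m/s for the car versus 5 s of overhead at 5 m/s for the bike, the bike wins the 50 m trip to the shop, while the car wins a 10 km trip.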
Simple problem with words... when I speak about an implementation, I think about an API... Wine is not an emulator or an API (implementation) of DirectX... it is a compatibility layer (translation) between the DirectX calls in the software and the OpenGL in the OS...
My statements in this topic are not wrong; in some cases they use a poor choice of words... but since English is not my language, and a correct explanation would need a long post like this one, I try to keep it simple (people don't like posts that are kilometers long)... Be sure that I know what I am talking about... I have a top-end computer (server board), but I run 4 OSes on it... each OS has its own advantages... in some cases, the same software is installed on two or more OSes... because some functions can be faster in one OS and other functions faster in another...