
FreeFrame SDK

Discussion in 'Developers Area' started by sleepytom, Feb 18, 2005.

  1. sleepytom

    sleepytom VJF Admin

  2. many2

    many2 Active Member

    That's great! It's really nice to see the FreeFrame concept growing :)
     
  3. VJamm

    VJamm VJ Software

    The FreeFrame SDK is available for Windows and Mac:

    On Windows it is mainly intended for MSVC (Microsoft Visual C++), and on Mac OS X, Xcode - but the SDKs will be useful to anyone developing on those platforms.

    Linux developers should download the Linux FreeFrame module.

    http://sourceforge.net/projects/freeframe/

    Russell
     
  4. levon

    levon Headless Chicken

    just wondering - does anyone compile in Cygwin using make/gcc, or LCC?
     
  5. VJamm

    VJamm VJ Software

    I think someone might have been working in GCC trying to compile the X screensaver FreeFrame port, but generally everyone seems to be on MSVC on Windows. GCC seems more appropriate for Linux and Mac (Xcode on the Mac is GCC underneath, as I understand it). The community of developers in this area is so overwhelmingly MSVC-based that there is a big advantage in using that environment, both in terms of helping each other and sharing code. We already have a lot of very useful code for MSVC, and even a plugin wizard in the SDK.

    Having said that, FreeFrame aims to be accessible from all environments - the template CPlugin already works in GCC on Linux and the Mac, so you should have no problem getting it to work in GCC.

    VJamm, Resolume and a few others use Delphi

    Russell
     
  6. tonfilm

    tonfilm dev

  7. VJamm

    VJamm VJ Software

    hi tonfilm

    is this loaded up into the SourceForge FreeFrame downloads system? I reckon you should load it in if not. Please contact me if you don't have the appropriate permissions.

    thanks

    Russell
     
  8. Nema

    Nema New Member

    do i understand correctly that freeframe input and output buffers are still plain system memory (i.e. handled as pointers to system memory)?

    if this assumption is correct, then freeframe plugins don't integrate well into hardware-accelerated (OpenGL/DirectX) applications, because freeframe buffers (even if calculated with hardware acceleration, e.g. using OpenGL or DirectX) must always be copied between system memory and gfx memory, which is a major bottleneck on any currently available system.

    don't be surprised if your freeframe-based effects never really perform at high-definition video resolutions - that such performance is possible is well proven by several applications using pixel shaders and a "gfx memory only" rendering pipeline. for example, just take a look at commercial games...
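    to put a rough number on that copying cost, here is a back-of-the-envelope sketch; the resolution, frame rate and two-copies-per-pass figures are illustrative assumptions, not measurements:

```cpp
// bytes per second of bus traffic caused by copying one RGBA frame
// out of gfx memory and back into it on every effect pass
double copyTrafficBytesPerSec(int width, int height, double fps, int copiesPerFrame)
{
    const int bytesPerPixel = 4; // assuming RGBA8 frames
    return (double)width * height * bytesPerPixel * fps * copiesPerFrame;
}
```

    for 1280x720 at 30fps with one download and one re-upload per pass, that comes to roughly 211 MB/s of bus traffic per plugin in the chain, before any actual processing happens.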

    at the same time freeframe seems to be much loved and supported by most applications, which makes it a very attractive option for programmers.
     
  9. sleepytom

    sleepytom VJF Admin

    yes that's correct - freeframe was designed to be a very simple CPU-based effects system so that it could become a popular cross-platform / cross-compiler standard. the wisdom of this is borne out by the huge range of supported applications now available.

    we would love to see a freeframe OpenGL version but currently lack the resources to develop such a thing.
     
  10. spark

    spark cut-up, composit, create

    ach, ain't that the rub... can we have a resources fairy please!

    toby
     
  11. t2k

    t2k New Member

    suggestions for the FFGPU architecture....

    //forward declaration (the factory below returns these)
    class FFGPUPlugin;

    class FFGPUContext
    {
    public:
        virtual void GetSize(int &width, int &height) = 0;
    };

    class FFGPUBoundTexture
    {
    public:
        virtual void GetSize(int &width, int &height, int &hardwareWidth, int &hardwareHeight) = 0;
    };

    class FFGLBoundTexture : public FFGPUBoundTexture
    {
    public:
        virtual unsigned int GetHandle() = 0;
        virtual unsigned int GetTarget() = 0;
        virtual unsigned int GetTextureUnit() = 0;
    };

    class FFD3DBoundTexture : public FFGPUBoundTexture
    {
    public:
        //??
    };

    class FFGPUTexture
    {
    public:
        enum bindFlagEnums
        {
            BF_REQUESTMIPMAPS = 1
        };

        virtual FFGPUBoundTexture *Bind(FFGPUContext &gpuc, int bindFlags) = 0;

        virtual void Unbind(FFGPUContext &gpuc) = 0;
    };

    class FFGPUPluginFactory
    {
    public:
        enum FFGPUAPI
        {
            API_INVALID = 0,
            API_OPENGL = 1,
            API_D3D = 2
        };

        virtual int IsFFGPUAPISupported(FFGPUAPI api) = 0;

        virtual const char *GetFFGPUName() = 0;
        virtual const char *GetFFGPUDisplayName() = 0;
        virtual int GetFFGPUVersion() = 0;

        virtual FFGPUPlugin *CreateFFGPUPlugin(FFGPUAPI api) = 0;
    };

    class FFGPUPlugin
    {
    public:
        virtual int FFGPURenderFrame(FFGPUTexture &inputTexture, FFGPUContext &gpuc) = 0;

        //takes an array of texture pointers
        virtual int FFGPURenderMultipleTextureFrame(FFGPUTexture **inputTextures, int numTextures, FFGPUContext &gpuc) = 0;
    };

    the .dll's exported function:

    FFGPUPluginFactory *GetFFGPUPluginFactory();
     
  12. t2k

    t2k New Member

    following up on the source code.. this architecture would require that the host software manage the implementation of off screen rendering targets. for plugins in series, the basic procedure would be:

    1) setup a source texture (FFGPUTexture)

    2) setup an offscreen render target and make it the active context

    3) call Plugin A's RenderFrame

    4) setup a second offscreen render target and make it the active context

    5) setup the FFGPUTexture that binds/references the render target that plugin A rendered to

    6) call plugin B's RenderFrame
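    the six steps can be sketched with stand-in types - no real GL here, RenderTarget and renderFrame are hypothetical placeholders for a pbuffer/FBO and a call into a loaded plugin - just to make the sequencing concrete:

```cpp
#include <string>
#include <vector>

// stand-ins for real GL objects, purely to illustrate the host-side sequencing
struct RenderTarget { int id; };
struct Texture { int sourceTargetId; };
typedef std::vector<std::string> CallLog;

void makeActive(const RenderTarget &rt, CallLog &log)
{
    log.push_back("activate target " + std::to_string(rt.id));
}

void renderFrame(const std::string &plugin, const Texture &src, CallLog &log)
{
    log.push_back(plugin + " renders from target " + std::to_string(src.sourceTargetId));
}

// the six steps for running plugins A and B in series
CallLog runChain()
{
    CallLog log;
    Texture source = {0};           // 1) set up the source texture
    RenderTarget targetA = {1};     // 2) first offscreen render target,
    makeActive(targetA, log);       //    made the active context
    renderFrame("A", source, log);  // 3) plugin A's RenderFrame
    RenderTarget targetB = {2};     // 4) second offscreen render target
    makeActive(targetB, log);
    Texture aResult = {targetA.id}; // 5) texture referencing A's output
    renderFrame("B", aResult, log); // 6) plugin B's RenderFrame
    return log;
}
```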

    trey
     
  13. VJamm

    VJamm VJ Software

    As I understand the situation, this kind of architecture would not be cross-platform or cross-compiler, e.g. it would only work in the C++ compiler it was made in. That's why we went for the single entry point procedural interface for FreeFrame 1 plugins.
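    the portability point can be made concrete: C++ classes export mangled names and compiler-specific vtable layouts, while a single C-linkage function taking plain ints is callable from any language that can load a dll. a minimal sketch - the function codes here are made up for illustration, not FreeFrame's real ones, and the dllexport decoration is omitted:

```cpp
// one exported entry point; the operation is selected by a code, so the
// binary interface is just "a C function taking two ints" - stable across
// compilers in a way C++ vtables are not
extern "C" typedef int (*plugMainFunc)(int functionCode, int param);

extern "C" int examplePlugMain(int functionCode, int param)
{
    switch (functionCode)
    {
    case 0: return 1;          // e.g. a hypothetical "initialise" -> success
    case 1: return param * 2;  // e.g. some hypothetical operation on param
    default: return 0;         // unknown codes fail gracefully
    }
}
```

    a host only ever holds a `plugMainFunc` pointer obtained from the dll, so Delphi, C, or C++ plugins all look identical to it.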

    Russell
     
  14. t2k

    t2k New Member

    I have not done any testing with virtual classes compiled with different compilers, but I suppose it is safe to say that they are implemented differently. I'm not sure that's necessarily a problem anymore, since most major compilers (even MSVC) are available for free now.

    Anyway, here's another try, more like the original freeframe api:

    //plugin exports 1 method, FFGPUPluginMain
    //generic success/failure is returned as 0 or 1,
    //any additional data comes back in methodResults

    typedef int (*FFGPUPluginMain_fpt)(int methodId, void *methodParams, void *methodResults);

    enum FFGPUPluginMethodIds
    {
        FFGPU_GetPluginInfo = 0,
        FFGPU_CreatePluginInstance = 1,
        FFGPU_DeletePluginInstance = 2,
        FFGPU_GLRenderFrame = 3
    };

    enum FFGPUAPI
    {
        FFGPUAPI_OPENGL = 1,
        FFGPUAPI_D3D = 2
    };

    struct FFGPUPluginInfo
    {
        int FFGPUVersion;
        int PluginVersion;
        int APIBitmask;
        char PluginName[128];
        char DisplayName[32];
    };

    struct FFGLTextureInfo
    {
        int width, height;
        int hardwareWidth, hardwareHeight;
        int handle;
        int target; //GL_TEXTURE_2D, RECT, 3D, etc
        int unit;   //0, 1, 2, etc
    };

    typedef int FFGPUPluginHandle;

    struct FFGLRenderFrameParams
    {
        //specifies which instance will handle the call
        FFGPUPluginHandle pluginInstance;

        int numSourceTextures;
        FFGLTextureInfo *sourceTextures;
        int viewportWidth, viewportHeight;
    };

    //////////////////////////////////
    //some sample host code for initializing/creating an instance of a plugin
    //////////////////////////////////

    //GetProcAddress returns a generic function pointer, so cast it
    FFGPUPluginMain_fpt pluginMain =
        (FFGPUPluginMain_fpt)GetProcAddress(library, "FFGPUPluginMain");

    FFGPUPluginInfo pluginInfo;

    if (pluginMain(FFGPU_GetPluginInfo, NULL, &pluginInfo))
    {
        //pluginInfo contains valid data

        if (pluginInfo.APIBitmask & FFGPUAPI_OPENGL)
        {
            //ok to use the plugin's GLRender method
            FFGPUPluginHandle pluginInstance = 0;

            if (pluginMain(FFGPU_CreatePluginInstance, (void *)FFGPUAPI_OPENGL, (void *)&pluginInstance))
            {
                //pluginInstance is a valid instance of the plugin
            }
        }
    }

    /////////////////////////
    //sample code for calling GLRenderFrame
    /////////////////////////

    Assume the host is having the plugin render to a frame buffer object or pbuffer of size
    640x480, and that the active GL context has already been configured to render to that pbuffer/FBO.

    Assume the host has bound a 640x480 texture to texture unit 0. If the card supports ARB_texture_non_power_of_two or rectangle textures, hardwareWidth/Height would match width/height, but for the sake of illustrating what happens on cards that don't support either, hardwareWidth/Height is set to the nearest (large enough) power of two.
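    the "nearest (large enough) power of two" is easy to compute; a host might use something like this (a sketch, not part of any SDK):

```cpp
// smallest power of two that is >= n, for sizing the backing texture
// on cards without non-power-of-two texture support
int nextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n)
        p *= 2;
    return p;
}
```

    for the 640x480 example this yields a 1024x512 hardware texture, matching the values below.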

    FFGLTextureInfo texInfo;
    texInfo.width = 640;
    texInfo.height = 480;
    texInfo.hardwareWidth = 1024;
    texInfo.hardwareHeight = 512;
    texInfo.handle = hostTextureHandle; //the GL texture object the host bound
    texInfo.target = GL_TEXTURE_2D;
    texInfo.unit = 0;

    FFGLRenderFrameParams params;

    //assume pluginInstance was successfully created
    //as in the above mentioned code
    params.pluginInstance = pluginInstance;

    params.numSourceTextures = 1;
    params.sourceTextures = &texInfo;
    params.viewportWidth = 640;
    params.viewportHeight = 480;

    if (pluginMain(FFGPU_GLRenderFrame, (void *)&params, NULL))
    {
        //render was successful
    }

    ///////////////
    //sample code to delete a plugin instance
    ///////////////

    pluginMain(FFGPU_DeletePluginInstance, (void *)pluginInstance, NULL);

    /////////////////////////////////
    what I left out:

    no way for a plugin to know what "time" it is. This is important for plugins that need to simulate physical movement

    no way for a plugin to safely manage allocation of additional GL resources. GL will re-use handles of resources that get "lost" because of window closes, running out of memory, etc. If a plugin allocates a resource and that resource is lost, there is a good chance that resources elsewhere in the application have been lost too, and handles will be re-issued and re-used in a way that sometimes results in a plugin thinking its resource handle is valid when in fact that handle has been "lost" and re-issued to some other code since the plugin last accessed it. That's a mouthful, but it's definitely a real issue I've dealt with in my software.

    not enough specification as to what the gl state should be prior to and after a call to FFGLRenderFrame.
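    one hedged sketch of how the handle re-issue problem could be addressed: the host bumps a "context generation" counter every time GL resources are lost or the context is recreated, and plugins validate their cached handles against it before use. all the names here are hypothetical - this is not part of any FreeFrame spec:

```cpp
// a plugin-side cache entry for a GL resource it allocated
struct CachedResource
{
    unsigned int glHandle; // e.g. a texture or display list handle
    int generation;        // host generation counter at creation time
};

// the host would pass its current generation into each render call;
// a stale generation means the handle may have been re-issued to other
// code since the plugin created it, so it must not be trusted
bool resourceStillValid(const CachedResource &r, int hostGeneration)
{
    return r.generation == hostGeneration;
}
```

    on a stale generation the plugin would drop its cache and re-create the resource rather than use the possibly re-issued handle.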
     
  15. t2k

    t2k New Member

    also, obviously i left out all of the mechanisms for plugin parameters - i'm sure there's a lot more; i'm just trying to get a discussion going. =)

    trey
     
  16. t2k

    t2k New Member

    some more code... an example "pass thru" plugin that might work with MSVC (if it were checked for errors, heheh)...

    #include "FFGPU.h"

    FFGPUPluginInfo g_pluginInfo =
    {
        FFGPU_SDK_Version, //ffgpu version
        1,                 //plugin version
        FFGPUAPI_OPENGL,   //supported API bitmask
        "PassThru",        //full plugin name
        "PassThru"         //shorter display name
    };

    int GLRenderPassThru(FFGLRenderFrameParams &params);

    //extern "C" prevents C++ name mangling, so the host's
    //GetProcAddress(library, "FFGPUPluginMain") can find the export
    extern "C" __declspec(dllexport) int FFGPUPluginMain(int methodId, void *params, void *results)
    {
        switch (methodId)
        {
        case FFGPU_GetPluginInfo:
            *((FFGPUPluginInfo *)results) = g_pluginInfo;
            return 1;

        case FFGPU_CreatePluginInstance:
            //all PassThru's can use the same instance handle
            //because there is no per-instance data
            *((FFGPUPluginHandle *)results) = 1;
            return 1;

        case FFGPU_DeletePluginInstance:
            return 1;

        case FFGPU_GLRenderFrame:
            if (params)
                return GLRenderPassThru(*(FFGLRenderFrameParams *)params);
        }

        return 0;
    }

    int GLRenderPassThru(FFGLRenderFrameParams &params)
    {
        //we want to take the source texture and draw it to the
        //viewport so that it fills the viewport completely
        //
        //for now assume the standard gl coordinate system where the
        //lower left viewport corner is (-1,-1) and the upper right
        //viewport corner is (1,1)
        //assume texture/model matrices are identity
        //
        FFGLTextureInfo &t = params.sourceTextures[0];

        //compute texture coordinates
        double texture_max_s = 1.0;
        double texture_max_t = 1.0;

        switch (t.target)
        {
        case GL_TEXTURE_2D:
            texture_max_s = ((double)t.width) / (double)t.hardwareWidth;
            texture_max_t = ((double)t.height) / (double)t.hardwareHeight;
            break;

        //rectangle textures use unnormalized (pixel) coordinates; the
        //ARB and NV tokens share the same value, so one case label suffices
        case GL_TEXTURE_RECTANGLE_ARB:
            texture_max_s = t.width;
            texture_max_t = t.height;
            break;
        }

        glColor4f(1.0, 1.0, 1.0, 1.0);
        glBegin(GL_QUADS);

        glTexCoord2d(0, 0); //lower left corner
        glVertex2f(-1, -1);

        glTexCoord2d(0, texture_max_t); //upper left
        glVertex2f(-1, 1);

        glTexCoord2d(texture_max_s, texture_max_t); //upper right
        glVertex2f(1, 1);

        glTexCoord2d(texture_max_s, 0); //lower right
        glVertex2f(1, -1);
        glEnd();

        return 1;
    }

    ///////////////////////////////////////////////

    FFGPU spec concerns:

    where does the #include for GL_TEXTURE_RECTANGLE_ARB / GL_TEXTURE_RECTANGLE_NV, and other GL extension tokens, come from?

    the spec needs to be specific about the state of the texture and modelview/projection matrices prior to the GLRenderFrame call, otherwise plugins won't be able to reliably assign vertex/texture coordinates.

    it's possible that the spec could require the host to have the texture matrix for each texture's texture unit pre-multiplied so that tex coords (0,0) and (1,1) always fit the texture perfectly, regardless of the texture target/hardware texture size. it would be nice if plugins didn't have to worry about all of that.

    the spec could stick to the standard gl vertex coordinates (-1,-1 to 1,1) although it is worth considering other options:

    1) pixel based bottom-up (lower left is 0,0 and upper right is viewportWidth, viewportHeight) or top-down (upper left is 0,0 and lower right is viewportWidth,viewportHeight)

    2) normalized 1:1 aspect ratio, wherein the shortest viewport dimension ranges from -1 to 1 and the longer viewport dimension ranges from -(longDimension/shortDimension) to(longDimension/shortDimension). as the window's aspect ratio increases, visible coordinate space decreases

    3) normalized 1:1 aspect ratio "shrink", wherein the longest viewport dimension ranges from -1 to 1 and the shorter viewport dimension ranges from -(shortDimension/longDimension) to (shortDimension/longDimension). same usage implications as (2).

    2&3) this kind of "normalized" space complicates trivial tasks like stretching a texture over the entire viewport, but simplifies the rendering of aspect ratio correct vector art/3d models.

    a more complete solution might allow the plugin to specify in its FFGPUPluginInfo the kind of coordinate system it wants from the host, picking from the above possibilities

    enum FFGLCoordSystems
    {
        FFGL_STANDARD_COORDS,
        FFGL_ASPECT_NORMALIZED_COORDS_SMALL,
        FFGL_ASPECT_NORMALIZED_COORDS_LARGE,
        FFGL_BOTTOM_UP_PIXEL_COORDS,
        FFGL_TOP_DOWN_PIXEL_COORDS
    };
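    options 2 and 3 can be pinned down with a small helper computing the visible coordinate range for a given viewport (a sketch; the names are mine, not from any spec):

```cpp
#include <algorithm>

// the visible range along each axis is -xMax..xMax and -yMax..yMax
struct CoordRange { double xMax, yMax; };

// option 2: the shortest dimension spans -1..1, the longer one extends past 1
CoordRange aspectNormalizedSmall(int viewportW, int viewportH)
{
    double shortDim = (double)std::min(viewportW, viewportH);
    CoordRange r = { viewportW / shortDim, viewportH / shortDim };
    return r;
}

// option 3 ("shrink"): the longest dimension spans -1..1, the shorter one shrinks
CoordRange aspectNormalizedLarge(int viewportW, int viewportH)
{
    double longDim = (double)std::max(viewportW, viewportH);
    CoordRange r = { viewportW / longDim, viewportH / longDim };
    return r;
}
```

    for a 640x480 viewport, option 2 gives x in -1.333..1.333 with y in -1..1, while option 3 gives x in -1..1 with y in -0.75..0.75.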

    trey
     
  17. sleepytom

    sleepytom VJF Admin

    you seem to have somewhat misunderstood the point of freeframe!

    freeframe needs to be an open standard that works across different platforms (mac / windows / linux) and can be integrated into software written in different languages, in different compilers.

    posting reams of MSVC-specific C++ doesn't help this develop.

    have a look at the VST spec - this was the inspiration for FreeFrame, as it's a very elegant and simple solution that allows people to program plugins in a wide variety of dev environments.

    FreeFrame 1 has been widely adopted because it is simple and because it works in all the major dev environments. there are over 200 plugins now available, programmed in a wide range of languages in a load of different compilers.

    a future hardware freeframe should stick to these principles so it can work on Mac, Windows and Linux.

    Pete Warden has some very elegant ideas about how to develop freeframe.

    technical discussion for freeframe should probably remain on the freeframe dev list rather than be on VJforums
     
  18. sleepytom

    sleepytom VJF Admin

    possibly the most useful work anyone can do currently for freeframe is to look into funding grants to enable some more time to be spent developing the spec.
     
  19. t2k

    t2k New Member

    Sleepytom,

    In the absence of a magical resources fairy, the only thing that will lead to the creation of a functioning FreeFrameGPU spec is a discussion among competent programmers like me who are willing to donate free time and free code. If you really want to see a FreeFrameGPU spec, you should be more encouraging of attempts like this to get the ball rolling.

    You've made it clear that my efforts are not appreciated here, so unless other developers join the discussion you can rest assured I will not continue it further.

    Trey
    ps. best of luck w/the stick
     
  20. sleepytom

    sleepytom VJF Admin

    yes, freeframe needs competent people to create code.

    however, before people start to post chunks of code the specification needs to be agreed - there is no point in making a FreeFrame 2 that will only compile in MSVC - this has been agreed by everyone on the freeframe-develop list (which is where the technical discussion of the freeframe spec takes place - most of the core developers seldom read VJforums).

    many of the core freeframe developers aren't using MSVC; an MSVC-based plugin architecture would be pretty useless and certainly could not be as widely adopted as the current FreeFrame 1 standard.

    if you wish to take part in the development of freeframe please go to www.freeframe.org and apply to join the development group.
     
  21. t2k

    t2k New Member

    Tom,

    Thanks for all the help, I'd be completely lost without your guidance.

    Trey
     
  22. levon

    levon Headless Chicken

    hey maybe next year if google do the summer of code again someone here might be able to apply for that?
     
  23. Bartholomew

    Bartholomew Resolume VJ Software

    thanks for getting the FFGL / FFHW / FFGPU (whatever we want to call it) discussion started again Trey! we have just recently been doing some more experimenting with OpenGL, pbuffers and GLSL together with Dave from inside-us-all, so we'd love to help work on the next-gen FF!
     
  24. Nema

    Nema New Member

    did someone ever think about a C# interface? C# code can be compiled under the platform-independent development environment Mono (though I would prefer .NET Studio), and it combines perfectly with GPU-accelerated DirectX (and there should be OpenGL wrappers for Linux and Mac).
     
  25. Rovastar

    Rovastar /..\

    Surely OpenGL is more logical than DX9 with a GL wrapper, unless things have improved drastically over the last year - the wrappers had a noticeable overhead. There is no real difference in the speed/functions of DX9 and GL 2.
     
  26. sleepytom

    sleepytom VJF Admin

    agree that FreeFrameGL/HD should be OpenGL-based - using DirectX as the basis for a cross-platform standard makes no sense at all.

    again i'll point out that the freeframe dev list is the place for technical discussions
     
  27. Nema

    Nema New Member

    i fully understand the advantages of OpenGL over DirectX if platform independence is a requirement.

    but one thing might be important to consider: is there a similar way on the OpenGL side to stream and decompress media (e.g. MPEG) directly into gfx memory and continue processing the data inside gfx memory, or will it be streamed and decompressed to system memory first and then copied to gfx memory (which would again be a major bottleneck that we don't have on the DirectShow/Direct3D side)?

    however, i can imagine that FreeFrame 2 would sacrifice this bit of performance for platform independence.
     
  28. t2k

    t2k New Member

    Nema,

    You raise a good point about an important difference between DX and GL - video streaming. Rovastar, DX does in fact have a better (faster) path for moving compressed video frames (from .avi, .mpg, etc) to texture memory.

    There is something called OpenML that intends to do the same thing for OpenGL, but I think they are not able to move forward as quickly as microsoft can w/their unlimited budget: http://www.khronos.org/openml/

    I first heard about OpenML a year or two ago but have yet to see any impressive demonstration of video decoding/texturing that significantly outperforms basic CPU video decompression and texture uploading. I don't exactly check their site every day though - there may be such technology available now.

    In any case, I don't think the FFGPU architecture will need to address video streams at this level. In my mind, I imagine it will process a single frame of video at a time, in the form of a texture object that has already been prepared for it by the host program.


    I have started a discussion today on the freeframe-develop list about the larger FFGPU issues, and I encourage any software developers with an interest in open source hardware accelerated video processing to join in.

    Trey
     
  29. jabah_pureza

    jabah_pureza Jarbas Jácome Jr.

    Join!

    I really would like to join this discussion. How can I?
    I tried to subscribe at SourceForge, but nothing happened.

    Thank you very much.

    j.jR.
     
  30. sleepytom

    sleepytom VJF Admin

    marcus is in charge of freeframe dev list subscriptions - you can mail him at marcusATinnerfieldDOTcoDOTuk
     
