I am trying to embed Python in a C++ multiplatform framework (JUCE, which targets Windows, OS X, Linux, iOS and Android).

Thanks to Kivy, I can get hold of a libpython build for each of these five operating systems.

NedBat's talk is a great starting point:
The video is broken, so you can see it here:

Official documentation here:

Now NedBat's talk (and almost every other resource I've found) focuses on writing C/C++ code that #include-s "Python.h" so as to use the Python C-API. This code gets compiled into a binary (a .so on OS X) which must be placed somewhere on Python's module search path. Then from Python you can do "import foo" and it loads that file in.

However, that's no good to me. I definitely don't want to have to compile separate targets, and then have to worry about my final app's installer placing the binary somewhere libpython can find it, even if I am bundling libpython with my app (which I will do). It is icky.

No, I want to just code my module as C/C++ functions and point the Python runtime at them.

Nice succinct 'hello world' example that demonstrates this:

I'm trying to figure out how to use PyCXX:

It is difficult to understand the internal structure of this library, and using it definitely requires a good understanding of how to do the interfacing using the Python C-API.

i.e. It's difficult to see how to use it, because it's difficult to even figure out what problems it is solving. Someone who has already written a Py<—>C/C++ bridge using the raw Python C-API will be in the right headspace to understand the code and the documentation.

But someone with just Python and C/C++ coding experience is going to have a very hard time of it. <— this is a very useful document written by the original author.

(NOTE: as of yet, no multithreading support, maybe ?)

I have started modifying the source code, as it contains a lot of duplication that (for me, at any rate) makes it much harder to make out what is going on.

So most of my work so far involves using macros to reduce the file sizes.

I've been building on OS X, by drag-dropping the relevant source files into an empty project that is set to link against libpython and has Python's include directory in its header search path.

Here I'm going to put up the current things I'm trying to get my head around.

My use case is that my C++ code will need to execute functions in a .py file, and this .py file will need to interact with certain C++ objects (i.e. call functions, access data values, even implement callbacks).

Maybe the right way to do this will be for me to make a Bridge object. The .py does something like 'bridge = Bridge.singleInst' and then deals exclusively with bridge, e.g.

x = bridge.myVarX
Not quite sure yet how to do callbacks.

Looking at the second listing on

static PyMemberDef Noddy_members[] = {...};
static PyMethodDef Noddy_methods[] = {...};
static PyTypeObject NoddyType = {
    ...
    Noddy_methods,             /* tp_methods */
    Noddy_members,             /* tp_members */
    ...
};

Then it gives the module an object of this type:

    PyObject* m;

    if (PyType_Ready(&NoddyType) < 0)
        return NULL;

    m = PyModule_Create(&noddy2module);
    if (m == NULL)
        return NULL;

    PyModule_AddObject(m, "Noddy", (PyObject *)&NoddyType);
    return m;

… It appears that you don't do this directly.
Instead you create a custom Type (a Python Type is roughly equivalent to a C++ class) with attributes and methods. Then you create an object of this type.

import Foo
bridge = Foo.Bridge()
bridge.MyVar = 7

But PyCXX doesn't contain any call to PyModule_AddObject.

Also searching for tp_members, the only occurrence is:

PythonType::PythonType( size_t basic_size, int itemsize, const char *default_name )
: table( new PyTypeObject )
, sequence_table( NULL )
, mapping_table( NULL )
, number_table( NULL )
, buffer_table( NULL )
{
    // PyTypeObject is defined in <python-sources>/Include/object.h
    memset( table, 0, sizeof( PyTypeObject ) );   // ensure new fields are 0
    *reinterpret_cast<PyObject *>( table ) = py_object_initializer;
    reinterpret_cast<PyObject *>( table )->ob_type = _Type_Type();
    // QQQ table->ob_size = 0;
    table->tp_name = const_cast<char *>( default_name );
    table->tp_basicsize = basic_size;
    table->tp_itemsize = itemsize;

    // Methods to implement standard operations
    table->tp_dealloc = (destructor)standard_dealloc;
    table->tp_print = 0;
    table->tp_getattr = 0;
    table->tp_setattr = 0;
    table->tp_repr = 0;
    table->tp_members = 0;
    ...
Which looks really strange. I think it uses tp_getattro and friends instead.
Certainly it does use tp_methods: there is a function for populating a method table and feeding it into tp_methods:

template<TEMPLATE_TYPENAME T> class PythonClass
    : public PythonExtensionBase
{
    explicit PythonClass( PythonClassInstance *self, Tuple &args, Dict &kwds )
    : PythonExtensionBase()
    , m_class_instance( self )
    { }

    virtual ~PythonClass()
    { }

    static ExtensionClassMethodsTable &methodTable()
    {
        static ExtensionClassMethodsTable *method_table;
        if( method_table == nullptr )
            method_table = new ExtensionClassMethodsTable;
        return *method_table;
    }

    // π!!! Surely we only need to feed the method table to 'behaviors().set_methods(...)' once, upon initialisation,
    // because just adding a method won't change the location of this table.
    // It seems redundant to keep setting this value to the same memory location each time this function runs.
    // Why not set behaviors().set_methods( methodTable() ) during initialisation, and then
    //     just methodTable().add_method( name, function, flags, doc ) every time we need to add a method?
    static void add_method( const char *name, PyCFunction function, int flags, const char *doc=nullptr )
    {
        behaviors().set_methods( methodTable().add_method( name, function, flags, doc ) );
    }
};

Regarding tp_getattro: yes, checking my revised code in cxx_extensions.cxx:

#define SUPP( _Foo_, _statement_ ) \
    PythonType &PythonType::_Foo_() \
    { \
        _statement_ \
        return *this; \
    }

SUPP( supportGetattr      , table->tp_getattr       = getattr_handler; )
SUPP( supportSetattr      , table->tp_setattr       = setattr_handler; )
SUPP( supportGetattro     , table->tp_getattro      = getattro_handler; )
SUPP( supportSetattro     , table->tp_setattro      = setattro_handler; )

ADD_HANDLERo( getattr_handler                   ( S, char *name )                         , REF(getattr(name))   )
ADD_HANDLERi( setattr_handler                   ( S, char *name, PyObject *value )        , p->setattr(name,Py::Object(value))   )
ADD_HANDLERo( getattro_handler                  ( S, PyObject *name )                     , REF( getattro(Py::String(name)) )   )
ADD_HANDLERi( setattro_handler                  ( S, PyObject *name, PyObject *value )    , p->setattro( Py::String(name), Py::Object(value) )   )

Now look at the handlers:

Py::Object PythonExtensionBase::getattro( const Py::String &name )
{
    return asObject( PyObject_GenericGetAttr( selfPtr(), name.ptr() ) );
}

int PythonExtensionBase::setattro( const Py::String &name, const Py::Object &value )
{
    return PyObject_GenericSetAttr( selfPtr(), name.ptr(), value.ptr() );
}

#define EXTBASE_FUNC( _type_ , _ret_, _func_ , ... ) \
    _type_ PythonExtensionBase::_func_( __VA_ARGS__ ) \
        { \
            throw RuntimeError( "Extension object missing implementation of " #_func_ ); \
            return _ret_; \
        }

EXTBASE_FUNC( Py::Object , Py::None(), getattr              , const char * )
EXTBASE_FUNC( int        , -1        , setattr              , const char *, const Py::Object & )

So we can see that getattr and setattr actually have no default support. If they are invoked then (unless the derived class provides some implementation) we will throw a Python RuntimeError.

And the documentation says that these handlers are deprecated, and have been superseded by getattro and setattro, which take a Python string object as parameter instead of a C string.

And then you see that the getattro and setattro handlers invoke the functions that are designed to be hooked into them: PyObject_GenericGetAttr and PyObject_GenericSetAttr.

Notice …

class new_style_class: public Py::PythonClass< new_style_class >
{
    static void init_type(void);

    Py::Object getattro( const Py::String &name_ )
    {
        std::string name( name_.as_std_string( "utf-8" ) );

        return ( name == "value" ) ?  m_value : genericGetAttro( name_ );
    }
};

… That this test class overrides getattro.
So this demonstrates how we connect C++ variables into Python.

Another thing that I'm really struggling with is "call handlers"
i.e. Calling a function on a Python object using the Python C-API
I can see these call handlers in three distinct places:
search project for "method_noargs_call_handler"
(1) find definition in cxx_extensions.cxx
ExtensionModule uses this
So if our C++ extension module has a function, this is how we persuade Python to run it
That text also turns up in PythonExtension (in ExtensionOldType.hxx).
But ARGH, PythonExtension also provides its own definition of method_noargs_call_handler,
which looks pretty much identical (wtf).
So I guess this is saying that for any extension type you write, if the extension type contains a function then this is how you persuade Python to run it.
Question: can we collapse both of these method_noargs_call_handler definitions into one? They do look very similar.

But now there is a third set of call handlers, which are defined as macros in ExtensionType.hxx
It looks like the new-style extension type gets the job done through these macros (as well as managing attributes through getattro/setattro rather than getattr/setattr).
So at some point someone has rewritten PythonExtension (probably because getattr was deprecated in favour of getattro), and at the same time they've replaced these call handlers. But why is the macro method preferred?
Do we really need three very similar looking implementations?

Also, it looks like ExtensionModule & PythonExtension implement a method map table with:

typedef std::map<std::string, MethodDefExt<T> *>        method_map_t;

whereas PythonClass rolls its own class (ExtensionClassMethodsTable) <— there appears to be some bug in this demo

Using the object viewer in Xcode, we can see that firstly it makes a few mistakes:
It lists iterators at the base level, whereas actually they are contained in sequence/map objects.
It also shows certain objects like List and Dict at the base level, even though they are derived from Object.
Note that there is ExtensionClassMethodsTable and MethodTable
I think these will be duplicates; one of them needs to go.
Note ExtensionModuleBase -> ExtensionModule -> (all of the example extension modules)

PythonExtensionBase -> PythonClass -> new_style_class (which is one of the demos)
^ this is for extension classes, not modules

PythonExtensionBase -> PythonExtension -> old_style_class and a few other demos like range

So we have TWO separate mechanisms for extension classes.
TODO: look through them and figure out why it was rewritten, which one to keep and which one to throw away.

Going back to the method tables: I need to find out whether an extension module has its own method table, whether this is different from an extension class method table, or whether we can use the same mechanism for both.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License