Hi Marcus,

Thanx for the quick response!

Why don't you use GrCreateContext?
Well, that was my first choice as well.
But if you look at grx20.h, line 493, you'll see:

#define GrCreateContext(w,h,m,c) (GrCreateFrameContext(GrCoreFrameMode(),w,h,m,c))

so I decided to use GrCreateFrameContext directly. You'll see why.

Maybe
  GrCreateFrameContext(GR_frameRAM8, 640, 400, NULL, NULL);
will do it without even switching to graphics-mode.
You try and report!
Yup.
Thanx, "GR_frameRAM8" is of course much more elegant than a bare "19".
And with this entry (whether in the more or the less elegant form) it is
possible to create a correct GrContext at all. That's why I use
GrCreateFrameContext.
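
To make the difference explicit, here is the comparison as I understand it
(an untested sketch, going only by the macro quoted above):

/* the macro: the frame mode comes from GrCoreFrameMode(), i.e. from
   whatever video mode is currently active */
GrContext *via_macro  = GrCreateContext(640, 400, NULL, NULL);

/* the direct call: the frame mode is named explicitly, so it should not
   depend on the video state at all */
GrContext *via_direct = GrCreateFrameContext(GR_frameRAM8, 640, 400, NULL, NULL);

/* either way, a NULL return means the context could not be built */
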
But the story does not end there.
I have now analysed at which "later" stage I get the previously mentioned SIGSEGV.

It's thrown somewhere in the call to GrImageDisplay:

GrImage *img_local = GrImageFromContext(mem_context);
GrImageDisplay(x, y, img_local);
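
To hunt this down I have put together a minimal test program (a sketch under
my own assumptions, not verified yet; mode, sizes and colours are just
placeholders). The NULL checks are the whole point of it: my suspicion is
that one of the intermediate pointers is already bad before GrImageDisplay
ever touches it.

#include <stdio.h>
#include <grx20.h>

int main(void)
{
    GrContext *mem_context;
    GrImage   *img_local;

    /* 1. a real graphics mode, since GrImageDisplay draws to the screen
       (I assume a zero return from GrSetMode means failure) */
    if (!GrSetMode(GR_width_height_color_graphics, 640, 480, 256)) {
        fprintf(stderr, "GrSetMode failed\n");
        return 1;
    }

    /* 2. the off-screen context, exactly as discussed above */
    mem_context = GrCreateFrameContext(GR_frameRAM8, 640, 400, NULL, NULL);
    if (mem_context == NULL) {
        fprintf(stderr, "GrCreateFrameContext failed\n");
        return 1;
    }

    /* 3. draw something recognisable into the memory context */
    GrSetContext(mem_context);
    GrClearContext(GrBlack());
    GrFilledBox(10, 10, 100, 100, GrWhite());
    GrSetContext(NULL);                /* back to the screen context */

    /* 4. wrap the memory context into a GrImage and display it */
    img_local = GrImageFromContext(mem_context);
    if (img_local == NULL) {
        fprintf(stderr, "GrImageFromContext failed\n");
        return 1;
    }
    GrImageDisplay(0, 0, img_local);

    GrKeyRead();                       /* wait for a key before leaving */
    GrSetMode(GR_default_text);        /* cleanup of contexts omitted here */
    return 0;
}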

Maybe I could use GrBitBlt() for the isometric mappings for now, but
I will also need the scaling capabilities, so this question has to
be solved sooner or later anyway.
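
Just so it is written down, the blit variant I have in mind would look
something like this (again only a sketch; mem_context is the memory context
from above, and GrWRITE is the plain copy operation):

/* copy the whole 640x400 memory context unscaled to the screen at (x, y) */
GrBitBlt(GrScreenContext(), x, y, mem_context, 0, 0, 639, 399, GrWRITE);

That only covers the unscaled case, though; for the scaling I would still end
up at the image functions again (GrImageStretch in grx20.h looks like the
candidate, if I read the header correctly).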

Maybe a small DDD graph will help.

greetings,
Joe.