Mac/PC Abstraction, Part 4:
Windows, Mouse, and Keyboard


In this article, I will cover the basic windowing architecture used on MacOS with Carbon. Carbon is an expansive API, but only a small portion of it is needed to create a window suitable for OpenGL rendering. Since I am writing this as a long-time Windows programmer who has had to learn to program on the Mac, most of what I cover will be presented in contrast to how things work on Windows.

Nowadays, it appears to be more popular to program Macs using Cocoa and Objective-C. The problem is that they are Mac-specific, much in the same way that .Net and C# are Windows-specific. And as such, they are far from being a convenient means of writing platform-independent code.

To write code that can build and run on more than one platform, your best choice is still C or C++. Interfacing C/C++ code with Objective-C is possible, if you feel like experimenting, but I have had no need to touch Cocoa or Objective-C. All of the interface functionality necessary exists in Carbon.

The sample code discussed in this article is available for download. It contains a QzTest project that can be built with both DevStudio 7 and Xcode 3. The relevant code is found in Qz/QzMainMac.cpp, which interfaces with the MacOS windowing system. The equivalent Windows code is in Qz/QzMainWin.cpp.

The APIs

There are several different APIs in OSX that can be used for OpenGL rendering, each with different trade-offs.

CGL is the low-level API upon which the other APIs are written. In some regards it is the most flexible, since it is the lowest-level API. However, its biggest drawback may be the inability to render in a window. CGL can only be used for full-screen rendering.
AGL is built on top of CGL and interfaces with MacOS's windowing system, allowing OpenGL rendering in a window. However, AGL's major drawback is that it can only draw in a window. AGL cannot perform full-screen rendering.
Cocoa may be the new hotness from Apple, but it takes the form of a class hierarchy in Objective-C. And while it is possible to interface Objective-C with raw C or C++ code, development and debugging in a multi-language environment can be a real headache. This probably is not an issue if you are familiar with Cocoa development — but if you're already familiar with Cocoa, you're probably not going to learn much from this article.
Since MacOS is built on a Unix foundation, it is also possible to expose OpenGL functionality with the X11 SDK. Since X11 windows provide a different look and feel from regular MacOS windows, this tends to be a rarely chosen option. It will almost certainly never earn a blessing from Apple, but if you're already familiar with X11, it can be a quick and dirty route to getting an app running on Macs.

The code I will be discussing in this article was implemented using AGL, since most of the work I've done has needed to render in a window. Writing an app that can run both in windowed mode and full-screen means using both AGL and CGL, and toggling between the two APIs. This can be cumbersome: since AGL is built on CGL, functionality can end up duplicated between the two APIs, leading to conflicts and buggy behavior.

For more details about using the different APIs to interface with OpenGL, I recommend OpenGL Programming on Mac OS X, by Robert Kuehne and JD Sullivan. It covers most of the details, although I've found their sample code is often obtuse, requiring a fair bit of study to decipher, which can be frustrating if you're still trying to learn the ins and outs of Mac programming. And in spite of only being a year or two old, the book's sample code (written for OS 10.4) tends to produce deprecation warnings when compiled with the latest versions of Xcode on 10.5 (and it means the code may well not build or run on 10.6). But frankly, of the few Mac programming books I have found, this is the only one that was worth the money.

Basic Structure

Unlike Windows, which creates and closes apps as needed, MacOS likes to start an app and keep it running all of the time, with only one instance of an app existing at a time. Creating new windows or opening existing documents is done by instancing a new window within the same process.

The windowing approach in MacOS bears a resemblance to MDI architectures on Windows, but the details are different.

When the app starts, it needs to create a menu bar, which hooks into the Finder bar at the top of the screen. The app also needs to register a callback function which will receive all app-level commands, such as creating a new document, opening an existing one, or terminating the app.

To create a new window, a callback function needs to be registered for that window. Each window has its own private callback handler. Events from the operating system are automatically routed to the appropriate callback function.

Unlike Windows, there is no concept of a message pump. Message dispatching is handled by the OS. The app does not need to query for messages or run a message loop. The OS will issue callbacks to the app whenever it needs to deliver a message (or event, to use the Carbon term).

Given this behavior, an object-oriented approach would be to define a class for app-level operations, and a separate class for handling the window. The app class is responsible for creating new window objects and tracking what windows exist. Each window object is associated with a single Carbon window, receiving only the events that pertain to that window (such as mouse and keyboard input).

In the sample project, the window interface code is broken into two classes, QzMainApp (for app-level events) and QzMainWin (for controlling a single window).

All documents that are open for a specific app exist within the same process, so extra care is required when accessing shared resources. Long-time Windows programmers (ahem) have a habit of using globals to help interface with message dispatching, especially when using raw Win32 calls to interface with the OS (as opposed to MFC or some other abstraction). Safely writing code on the Mac means that global objects (including singletons, factories, et al.) must be restricted or eliminated completely to avoid conflicts between multiple windows.

Granted, avoiding the use of globals is good design, but good design and shipping code are not always synonymous.

Creating the App

To create a windowed app that uses Carbon events, the following sequence of operations is required:

Create a Menu Bar
The app needs a menu bar that will be displayed across the top of the screen, as part of the Finder. Although this menu bar can be created programmatically, it is usually easier to define a menu resource that is stored in the NIB file. At the very least, the menu needs a "Quit" option.
Install an Event Handler
An event handler is needed to receive the app-level Carbon events. This is a plain C function pointer that will receive event callbacks. This callback will have to handle events for creating new windows, opening existing files, and terminating the app. Many of these events can be processed by the default handlers built into the system, but you still need to override the new and open events to create windows.
Call RunApplicationEventLoop()
Once all of the set-up is completed, call the RunApplicationEventLoop function. This event loop interfaces with Carbon, and will handle sending events to the event handler. This function does not return until the app terminates.

Once the app drops into the main event loop, an icon for it will appear on the Dock. When the app has the input focus, its custom menu bar will appear on the Finder.

Apps written for Windows are often designed so that multiple instances of the app can run as separate processes, each process displaying a different file or document. However, when writing apps for the Mac, only one instance of the app exists at one time. If multiple documents are open, they all exist within the same process space. Extra caution is therefore needed to isolate the memory structures for one document from those for another document.

Or you could simply ignore this and prohibit more than one window from being open at one time, but this is not generally a user-friendly approach (unless you're doing something like writing a game). This is, however, exactly the approach the test app uses, since this keeps the implementation of QzMainApp simpler. This code is intended for demonstration purposes, so I kept the implementation fairly minimal.

The following chunk of code demonstrates the minimal steps needed to create a new app, display its menu bar, and hook in a callback function to receive events from the OS:

// List of app-level events that are handled by QzMainApp.
static const EventTypeSpec g_AppEvents[] =
{
    { kEventClassCommand,     kEventCommandProcess },
    { kEventClassApplication, kEventAppActivated },
    { kEventClassApplication, kEventAppDeactivated },
    { kEventClassApplication, kEventAppHidden },
    { kEventClassApplication, kEventAppShown },
};

OSStatus QzMainApp::LaunchApp(void)
{
    OSStatus result;
    IBNibRef hNIB;

    // Create a NIB reference, which is needed to access the definitions
    // within the NIB file.  Give it the name of the NIB file, without the
    // .nib file extension.  CreateNibReference() will search for this NIB
    // within the application bundle.  Note that the NIB will have .xib as
    // its file extension within the project file.  The actual .nib file is
    // created when the project is built.
    result = CreateNibReference(CFSTR("main"), &hNIB);
    if (noErr != result) {
        return result;
    }

    // Once the NIB reference is created, set the menu bar.  "MenuBar" is
    // the name of the menu bar object, as defined in the NIB.  This name
    // is set in InterfaceBuilder when the nib is created.
    result = SetMenuBarFromNib(hNIB, CFSTR("MenuBar"));

    // Dispose of the NIB when we're done with it to avoid resource leaks.
    // Do this here before error checking to assure that this will always
    // be done, regardless of whether the previous function succeeded.
    DisposeNibReference(hNIB);

    if (noErr != result) {
        return result;
    }

    // Install the handler that will receive app-level events.
    InstallEventHandler(GetApplicationEventTarget(), StubAppEvent,
            GetEventTypeCount(g_AppEvents), g_AppEvents, this, NULL);

    return noErr;
}

After the above code has been executed, call RunApplicationEventLoop. That will hand control of the app over to Carbon, which will periodically notify the app of events through the StubAppEvent callback function. The demo code uses the QzMainApp class to encapsulate all of the app-level support code, so StubAppEvent only needs to recast the context pointer back to a QzMainApp* pointer, then forward the call into the actual event handler method.

OSStatus QzMainApp::StubAppEvent(EventHandlerCallRef hCaller,
                                 EventRef hEvent,
                                 void *pContext)
{
    return reinterpret_cast<QzMainApp*>(pContext)->HandleEvent(hCaller, hEvent);
}

OSStatus QzMainApp::HandleEvent(EventHandlerCallRef hCaller, EventRef hEvent)
{
    switch (GetEventClass(hEvent)) {
        case kEventClassCommand:
            return HandleEventClassCommand(hEvent);

        case kEventClassApplication:
            return HandleEventClassApp(hEvent);
    }

    // Always return eventNotHandledErr if we don't handle the event.
    // Assuming that the events were registered correctly, the only events
    // we should ever see are the ones we requested, so we should not ever
    // get to this point.
    return eventNotHandledErr;
}

Creating a Window

Whenever the app needs to create a new document or open an existing file, it needs to create a window. All windows need to coexist at the same time, in the same address space, and must avoid corrupting one another's states.

To create a new window, you perform the following sequence of operations:

Create a New Window
A new window can be created manually, or from a resource stored in a NIB. A properly working window needs to have standard event handlers installed, and the most reliable way I have found for windows to work correctly is to create them explicitly with CreateNewWindow. Windows defined in NIBs tend to be flaky for reasons I have yet to divine, which prevents the standard event handlers from being properly installed.
Install an Event Handler
Each window needs an event handler. This is a plain C callback function, which will receive events whenever the window is resized, moved, activated, etc., as well as for delivering mouse and keyboard events.
Create an AGL Context
Assuming you are going to be using OpenGL for rendering (which is the only rendering approach I use on Macs), you will need an AGL context that interfaces with the OpenGL renderer.

// List of window-level events that are handled by QzMainWin.
static const EventTypeSpec g_WindowEvents[] =
{
    { kEventClassCommand,   kEventCommandProcess },
    { kEventClassWindow,    kEventWindowActivated },
    { kEventClassWindow,    kEventWindowDeactivated },
};

OSStatus QzMainWin::LaunchWindow(void)
{
    // Caution when using the Rect struct: the components are arranged as
    // top/left/bottom/right, which is different from Windows.  Trying to
    // automatically init it with values between { and } will result in the
    // horizontal and vertical values being swapped if you're not paying
    // attention to the convention.
    Rect windowRect;
  = 0;
    windowRect.left   = 0;
    windowRect.bottom = m_WindowHeight;
    windowRect.right  = m_WindowWidth;

    // Attributes for the pixel format.  The list must end with AGL_NONE.
    GLint attribs[] =
    {
        AGL_RGBA,
        AGL_DOUBLEBUFFER,
        AGL_DEPTH_SIZE, 32,
        AGL_NONE
    };

    // Create the OpenGL render context that will
    // be used for drawing within this window.
    AGLPixelFormat pixelFormat = aglChoosePixelFormat(NULL, 1, attribs);
    m_AglContext = aglCreateContext(pixelFormat, NULL);

    // The pixel format is no longer needed once the context exists.
    aglDestroyPixelFormat(pixelFormat);

    // These flags control the look of the window, its border,
    // shadows behind the window, etc.
    WindowAttributes attribFlags = kWindowStandardDocumentAttributes
                                 | kWindowStandardHandlerAttribute
                                 | kWindowResizableAttribute
                                 | kWindowLiveResizeAttribute
                                 | kWindowNoShadowAttribute
                                 | kWindowCloseBoxAttribute;

    // Create the window that will be used for drawing.  It is possible
    // to create this window using CreateWindowFromNib(), but that
    // requires a window to be defined in the NIB, and that window must
    // be flagged to have the standard handlers loaded.  If the standard
    // handlers are not loaded, not all events will be processed, even
    // if window-specific handlers are loaded.  An example of this is
    // mouse events: these events will never be received if the standard
    // handlers are not loaded.  Either create a window that explicitly
    // has kWindowStandardHandlerAttribute defined, or make certain that
    // the window definition in the NIB has this flag set.
    CreateNewWindow(kDocumentWindowClass, attribFlags, &windowRect, &m_hWindow);
    m_EventHandlerUPP = NewEventHandlerUPP(StubWindowEvent);

    // Install the event handler for the window.  This will receive all of
    // the window-level events, such as cut/copy/paste requests along with
    // mouse and keyboard input.
    InstallEventHandler(GetWindowEventTarget(m_hWindow), m_EventHandlerUPP,
            GetEventTypeCount(g_WindowEvents), g_WindowEvents, this, NULL);

    // Define a timer that will wake up the app every few milliseconds
    // to prompt the window to render the next frame with OpenGL.
    m_TimerUPP = NewEventLoopTimerUPP(StubTimer);
    EventTimerInterval delay = kEventDurationSecond / c_TargetFPS;
    InstallEventLoopTimer(GetMainEventLoop(), delay, delay, m_TimerUPP,
            this, &m_hTimer);

    // Position the window in the middle of the screen.
    RepositionWindow(m_hWindow, NULL, kWindowCenterOnMainScreen);

    // Windows are created in the hidden state,
    // so we need to explicitly make them visible.
    ShowWindow(m_hWindow);

    // Assign the OpenGL context to this window
    // so we will be able to draw to it.
    aglSetWindowRef(m_AglContext, m_hWindow);

    // Request the bounds of the current window.  This is important since
    // the window that gets created will be larger than requested.  Extra
    // space is reserved at the top of the window for the menu bar (22 lines
    // on the system used for testing).  Getting the bounds will tell us
    // what the actual window size is, and the relative position of the
    // upper left corner, which needs to be subtracted from mouse coords
    // to map the mouse to the correct position within the window.
    HIWindowGetBounds(m_hWindow, kWindowContentRgn, kHICoordSpaceScreenPixel,
            &m_WindowPos);
    HIWindowGetBounds(m_hWindow, kWindowContentRgn, kHICoordSpaceWindow,
            &m_OutBounds);
    m_WindowWidth  = QzFloatToInt(m_OutBounds.size.width);
    m_WindowHeight = QzFloatToInt(m_OutBounds.size.height);

    // The window is now ready for OpenGL rendering.

    return noErr;
}

As was the case for app-level events, the window class provides a static class method for the callback function. This stub function recasts the context pointer, forwarding the function call into the window object.

OSStatus QzMainWin::StubWindowEvent(EventHandlerCallRef hCaller,
                                    EventRef hEvent,
                                    void *pContext)
{
    return reinterpret_cast<QzMainWin*>(pContext)->HandleEvent(hCaller, hEvent);
}

OSStatus QzMainWin::HandleEvent(EventHandlerCallRef hCaller, EventRef hEvent)
{
    switch (GetEventClass(hEvent)) {
        case kEventClassCommand:
            return HandleEventClassCommand(hEvent);
    }

    // Always return eventNotHandledErr if we don't handle the event.
    // Assuming that the events were registered correctly, the only events
    // we should ever see are the ones we requested, so we should not ever
    // get to this point.
    return eventNotHandledErr;
}

There is no one true way to control the frame rate of a graphics app on Macs. The most common technique is to register a timer, which will wake up the app every few milliseconds to render the screen. All this timer needs to do is call InvalWindowRect. This will cause the OS to send a kEventWindowDrawContent event to the window, which is what will be used to render the window.

pascal void QzMainWin::StubTimer(EventLoopTimerRef hTimer, void *pContext)
{
    reinterpret_cast<QzMainWin*>(pContext)->HandleTimer(hTimer);
}

void QzMainWin::HandleTimer(EventLoopTimerRef hTimer)
{
    // Only force the window to redraw itself if it is visible.
    // This prevents minimized windows from wasting system resources.
    if (IsWindowActive(m_hWindow)) {
        Rect rect;

        // Invalidate the window using window coordinates.
        rect.left   = 0;
  = 0;
        rect.right  = m_OutBounds.origin.x + m_OutBounds.size.width;
        rect.bottom = m_OutBounds.origin.y + m_OutBounds.size.height;

        InvalWindowRect(m_hWindow, &rect);
    }
}

Carbon Events

Events have three levels of info: class, event, and parameter.

The class provides a general indication of what the event represents: an app-level command, a window-level command, mouse input, keyboard input, etc. The class type is extracted from the event using the GetEventClass function. You don't necessarily need to look at the event class, but doing so allows the code to switch on the class type, splitting up the message handling into separate functions based on class instead of having one long switch statement that handles every possible type of event.
The event type itself indicates what the message contains: window resized, window deactivated, mouse button click, etc. The event type is extracted using GetEventKind.
Each event contains several parameters. The parameters available depend on the type of event. If the window was moved, the parameter data will contain the new window position. For a key press, the parameter data indicates which key was pressed and which modifier keys were being held down. The parameter data is accessed using GetEventParameter.

The default handling code set up by Xcode's wizards frequently nests these three levels together, so it is common to see three nested switch statements, which (as far as I'm concerned) are hard to read. The example code splits the events up into separate handler functions for each class.

One important point must be kept in mind when writing event handlers: the handler code must return the value eventNotHandledErr in order for the standard event handlers to also process the code. Returning noErr or any other error code will prevent the standard event handlers from being executed. Only return noErr when you want to explicitly prevent the standard handlers from processing a specific event.

Keyboard Input

Keyboard input through Carbon is divided into two categories: raw and translated.

Raw input is indicated by the kEventRawKeyDown and kEventRawKeyUp events. These events specify that a physical key has been pressed or released, allowing the app to track the state of keys and keep an internal queue of key events (useful for keyboard control in games). Raw input reports both a key code (accessed with kEventParamKeyCode) and a character value (in kEventParamKeyMacCharCodes).

However, raw input suffers from two problems: no standard virtual key codes and no input from modifier keys.

Unlike Windows, which uses a fixed set of virtual key codes for low-level input, each Apple keyboard product line uses different raw key codes. The key codes for two different Apple keyboards may be identical, slightly different, or radically different. There is no reliable way to determine what physical keyboard is being used, or what set of language-specific symbols is etched into the keys (QWERTY vs. AZERTY, dead keys for accents, etc.). If you need the raw key input state for some reason (game developers needing exact key state information, for example), keep in mind that the raw key values on your development systems may be incompatible with the keyboards your customers are using.

The other problem with raw input is that no per-key down/up events are received for the modifier keys: shift, alt, control, command, etc. Since no individual events are received for these key presses, the only way to determine whether modifier keys are being held down is by checking the modifier flags attached to the other key events that are received. This makes these keys awkward to use for gameplay controls, such as the run, stealth, or highlight actions common in many games.

The other category of key events is indicated by kEventTextInputUnicodeForKeyEvent events. These events have been internally translated by the OS: any dead keys pressed for accent marks have been applied, case has been adjusted for the caps lock and shift keys, and the input has been mapped according to the physical layout of the keyboard (QWERTY, AZERTY, or other language-specific arrangements). The contents of the event will be a single symbol, but that symbol is represented in UTF-8 and may be composed of multiple bytes.

This translated form of input is useful for receiving typed input for a dialog box or other edit control. Given the translated nature of the input, it will often be useless for tracking keyboard state since a different Unicode symbol is used for each variation of a letter (upper vs. lower case, every accent mark that can be applied, and in some cases multiple accents can be applied to a single character).

Is there a better way to get key state information, in a way that is consistent across all the myriad Mac product lines? I'm still trying to find an answer to this myself (and judging from internet forums, I am far from alone).

Mouse Input

Mouse input is fairly straightforward. However, one special case merits attention for anyone planning on locking the mouse. Windows allows the mouse pointer to be locked to a specific window. This is often done while a mouse button is down, which assures that the app will always receive the "mouse up" event, even if the mouse pointer moves outside the bounds of the window.

However, MacOS does not support this type of operation. It is possible to hide the mouse pointer using the functions ShowCursor and HideCursor, but there is no way to prevent the mouse pointer from moving over another window.

To emulate this functionality, the program needs to trap all mouse move events and issue a CGWarpMouseCursorPosition function call to warp the mouse pointer back to some neutral position (e.g., the position of the mouse when it clicked in the window). CGWarpMouseCursorPosition does not produce kEventMouseDragged or kEventMouseMoved events, so it will not result in recursive event dispatching problems.

However, it does tend to make the mouse pointer bounce around within a small area around the warp position. It also prevents the mouse pointer from tracking to whatever position the user expects based on their movement of the mouse. So it is best to call HideCursor to make the pointer disappear while doing this.

void QzMainWin::UpdateMousePosition(EventRef hEvent)
{
    // The normal case is when the mouse is not locked to the window.
    // Here we just need to forward the new window-relative position
    // of the mouse into the manager.
    if (false == QzIsMouseLocked()) {
        // Warning: This struct uses floats.
        HIPoint position;

        OSStatus result = GetEventParameter(hEvent, kEventParamWindowMouseLocation,
                          typeHIPoint, NULL, sizeof(position), NULL, &position);

        if (0 == result) {
            // Convert the mouse position to be relative to the upper-left
            // corner of the window.
            S32 newX = QzFloatToInt(position.x);
            S32 newY = QzFloatToInt(position.y - m_OutBounds.origin.y);

            // Ignore the event if the mouse's position has not changed.
            if ((newX != m_MousePosX) || (newY != m_MousePosY)) {
                m_MousePosX = newX;
                m_MousePosY = newY;

                if (NULL != m_pManager) {
                    m_pManager->MouseMove(m_MousePosX, m_MousePosY);
                }
            }
        }
    }
    // Otherwise, the mouse is locked.  In this state, the mouse is hidden
    // and forced to stay within the bounds of the window, and the mouse
    // changes are sent in as delta values instead of absolute coordinates.
    // This mode is useful for spinning 3D objects, or when rendering from
    // a first-person point of view.
    else {
        // Warning: This struct uses floats.
        HIPoint delta;

        OSStatus result = GetEventParameter(hEvent, kEventParamMouseDelta,
                          typeHIPoint, NULL, sizeof(delta), NULL, &delta);

        if (0 == result) {
            // To keep the mouse pointer within the bounds of the window, we
            // warp it back to the previous position.  However, warping is
            // done in display coordinates, not window coordinates, so we
            // have to map the window coordinates of the events into display
            // coordinates before we can call the CG routines.

            CGPoint warp;
            warp.x = m_MousePosX + m_WindowPos.origin.x;
            warp.y = m_MousePosY + m_WindowPos.origin.y;

            CGWarpMouseCursorPosition(warp);

            if (NULL != m_pManager) {
                // Forward the raw movement deltas to the manager.
                m_pManager->MouseDelta(QzFloatToInt(delta.x), QzFloatToInt(delta.y));
            }
        }
    }
}
For example purposes, the demo app QzTest can toggle the mouse pointer to and from a locked state by pressing the "1" key. When the mouse pointer is locked, the pointer is hidden and a small quad is drawn to indicate the current locked position. The mouse pointer itself never moves: the cumulative delta values are used to control the locked position.

Note that mouse deltas include mouse acceleration. There does not appear to be any programmatic way to neutralize the acceleration, or access the raw, unaccelerated mouse input. (If there is a way, I'd love to know what it is. The acceleration problem makes game programming a real PITA.)

Accessing mouse wheel events is straightforward, with only a couple of minor points to consider. First, the mouse wheel events allow for 2D trackball logic, so you need to distinguish between X-axis and Y-axis events. Second, the delta values are discrete ticks, as opposed to Windows, where the wheel delta values have been multiplied by 120 at the OS level (ostensibly to allow for more precise mice in the future, though I've never seen a wheel mouse that did it).

void QzMainWin::UpdateMouseWheel(EventRef hEvent)
{
    EventMouseWheelAxis axis;
    OSStatus result = GetEventParameter(hEvent, kEventParamMouseWheelAxis,
            typeMouseWheelAxis, NULL, sizeof(axis), NULL, &axis);

    if (0 != result) {
        return;
    }

    // The mouse wheel interface supports track-ball logic, so wheel events
    // can occur in both the X and Y axis.  Normal mice only have a Y-axis,
    // so ignore X-axis events.
    if (kEventMouseWheelAxisY == axis) {
        SInt32 wheelDelta;
        result = GetEventParameter(hEvent, kEventParamMouseWheelDelta,
                typeSInt32, NULL, sizeof(wheelDelta), NULL, &wheelDelta);

        if (0 != result) {
            return;
        }

        if (NULL != m_pManager) {
            // For Microsoftian compatibility, multiply the amount of wheel
            // motion by a magic number.  This allows MouseWheel() to process
            // values in the same numerical range on all platforms.
            m_pManager->MouseWheel(wheelDelta * QzMouseWheel_Delta);
        }
    }
}

And a final word of warning: pay close attention to the origin for mouse and window coordinates. Some Carbon functions provide coordinates relative to the lower-left corner of a window, while other functions use coordinates relative to the upper-left. Some coordinates are relative to the window, and others are absolute screen positions.

NIBs and Projects

A NIB file is analogous to the resources in a Win32 app. The difference is that Windows app resources are stored in the EXE file, while NIB resources are compiled into a separate binary file that is stored in the app bundle. The bundle itself is something akin to a ZIP file, containing the executable (possibly several different executables for different platforms and different versions of MacOS), along with any other resource files that are required for the app to run.

Something similar is done with xcodeproj files. These are actually directories (and will show up as such if a project is zipped up and moved over to a Windows machine), but Finder will not allow users to see the contents of those directories. However, it is possible to cd into these directories from a terminal window, which is useful when you need to edit the pbxproj file by hand.

Generally, you can access most of the project settings from the GUI in Xcode. However, in my experience, some settings can only be changed by manually editing the pbxproj file. One example is pre-compiled headers: these settings never show up properly in the Xcode GUI. They always appear to be disabled, even when enabled in the project file. Every time I create a new project, I have to manually edit the project file to turn these off.