Solution for line-by-line .MC "conditional compilation"

Here is my rough solution for "conditional compilation" of .MC files, in case anyone is interested. It should work on Linux, macOS, and Windows 10 (if you install Ubuntu/bash).

It's actually just a basic bash script file, along with a text file where you define:
  • A number of "targets" (e.g. CIQ1 vs CIQ2 vs CIQ3)
  • The annotations to be applied to each target
  • The C-style macros to be used with #if, #ifdef, and #ifndef so we can enjoy our archaic line-by-line source code inclusion/exclusion in Connect IQ.


In the example below, assume that the following annotations have already been set up in the jungle file, so that each device excludes the annotations that don't apply to it:
:round - all round watches
:semiround - all semiround watches
:ciq2 - all watches currently supporting ciq2 (but not ciq3)
:ciq3 - all watches currently supporting ciq3
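
For example (device IDs purely illustrative; in practice you'd list every device you support), the jungle entries might look something like this:

# monkey.jungle: each device excludes the annotations that do NOT apply to it
fenix6.excludeAnnotations = semiround;ciq2
fr235.excludeAnnotations = round;ciq3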

In the example, the class to be conditionally compiled has a whopping 3 features. There are two build targets:
A) Round CIQ3 watches
B) Semiround CIQ2 watches

Target A supports features 1 and 2.
Target B supports features 2 and 3.

Explanation:
  • generate.sh looks for files named *.src (and *.mc-src) in the current folder. It creates a subfolder called autogen/ and places the modified versions of the source files there, one copy per defined target. All comments and whitespace are preserved.
    • I recommend using the extension .mc-src for your "unprocessed" MC files. That way you can open *.mc-src files in the Monkey C Eclipse editor and still get syntax highlighting, etc.

  • All files in the autogen/ subfolder will be deleted when this script runs, so make sure you don't put anything else in there.
  • generate.sh uses the target definition file (generate.targets) that you provide to create the modified source files:


generate.targets (example)
targetA|:round,:ciq3|FEATURE_ONE,FEATURE_TWO
targetB|:semiround,:ciq2|FEATURE_TWO,FEATURE_THREE
global||GLOBAL_SYMBOLS



The first target in this example is
"targetA|:round,:ciq3|FEATURE_ONE,FEATURE_TWO"
  • targetA is just a descriptive name for your target; it is used in the name of the generated source file (e.g. myClass.targetA.mc).
  • :round,:ciq3 is the list of annotations that apply to the given target. The C-style define BUILD_TARGET will be set to ":round :ciq3" in this case. All unannotated global symbols and class definitions should have "(BUILD_TARGET)" in front of them.
  • Global symbols and class definitions which already have annotations should be wrapped in an #ifdef/#endif block, using a macro that is defined for exactly one target (e.g. GLOBAL_SYMBOLS), so they are only emitted once.
    In the above example, the "global" target is defined to handle global symbols and classes that are already annotated.
  • FEATURE_ONE,FEATURE_TWO is the list of C-style macros which will be applied to the given target.
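
For example, with the targets file above and a template named myClass.mc.src, the script below ends up invoking the preprocessor for targetA roughly like this:

gcc -nostdinc -traditional-cpp -C -x c -P -E "-DBUILD_TARGET=:round :ciq3" -DFEATURE_ONE -DFEATURE_TWO myClass.mc.src > autogen/myClass.targetA.mc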


generate.sh
Bash script file. (Tested in Ubuntu for Windows 10. Should work on Linux, and it should work on macOS, as it doesn't require any features from Bash 4 afaik.)
#!/bin/bash

#prereqs:
# bash
# gcc

#stackoverflow.com/.../how-do-i-split-a-string-on-a-delimiter-in-bash
#stackoverflow.com/.../how-can-i-join-elements-of-an-array-in-bash
#stackoverflow.com/.../read-lines-from-a-file-into-a-bash-array

function join_by { local IFS="$1"; shift; echo "$*"; }

# Read targets from file
IFS=$'\r\n' GLOBIGNORE='*' command eval 'TARGETS=($(cat generate.targets))'

mkdir -p autogen
rm -f autogen/*

shopt -s nullglob

# You can add more extensions besides "*.src" here.
# For example, you may wish to use .mc-src as a unique extension
# for MC source template files, so that you can open *.mc-src files
# in the Monkey C editor, in Eclipse.
for srcfile in *.src *.mc-src; do
#for srcfile in *.src ; do
    echo Source file: $srcfile
    echo ""
    for TARGET in ${TARGETS[@]}; do
        unset TARGET_ARRAY
        unset ANNOTATIONS_ARRAY
        unset DEFINES_ARRAY

        # Split "name|annotations|defines" into its three fields
        IFS="|" read -ra TARGET_ARRAY <<< "$TARGET"
        echo Target name: ${TARGET_ARRAY[0]}

        IFS="," read -ra ANNOTATIONS_ARRAY <<< "${TARGET_ARRAY[1]}"
        IFS="," read -ra DEFINES_ARRAY <<< "${TARGET_ARRAY[2]}"

        echo Target annotations: ${ANNOTATIONS_ARRAY[@]}
        echo Target defines: ${DEFINES_ARRAY[@]}

        BUILD_TARGET_DEFINE=$(join_by " " ${ANNOTATIONS_ARRAY[@]})

        # Turn each macro name into a -D argument for the preprocessor
        unset DEFINES_ARGS_ARRAY
        count=0
        for DEFINE in ${DEFINES_ARRAY[@]}; do
            DEFINES_ARGS_ARRAY[count]="-D$DEFINE"
            count=$(( $count + 1 ))
        done

        # Source file: MYFILE.mc.src
        # Output file: autogen/MYFILE.TARGETNAME.mc
        # Strip the outer extension (.src / .mc-src), then grab the inner one (.mc)
        # so it can be re-appended after the target name.
        filename=$srcfile
        extension="${filename##*.}"
        filename="${filename%.*}"
        extension="${filename##*.}"
        filename="${filename%.*}"
        DESTFILE=autogen/$filename.${TARGET_ARRAY[0]}.$extension

        echo Writing $DESTFILE
        set -x
        gcc -nostdinc -traditional-cpp -C -x c -P -E "-DBUILD_TARGET=$BUILD_TARGET_DEFINE" ${DEFINES_ARGS_ARRAY[@]} "$srcfile" > "$DESTFILE"
        set +x
        echo ""
    done
    echo "==================================="
    echo ""
done
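
To use it, cd into the folder containing your templates and generate.targets, then run:

./generate.sh
# autogen/ will then hold one generated .mc per template per target, e.g.
# autogen/myClass.global.mc, autogen/myClass.targetA.mc, autogen/myClass.targetB.mc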


Example MC.SRC file that implements 2 of 3 features, depending on your target.
myClass.mc.src
or
myClass.mc.mc-src
#ifdef GLOBAL_SYMBOLS // prevent dupe symbol definitions
(:ciq2) const isCiq3 = false;
(:ciq3) const isCiq3 = true;

#else // ifndef GLOBAL_SYMBOLS

(BUILD_TARGET) class myClass
{
    function initialize()
    {
#ifdef FEATURE_ONE
        System.println("I support feature one."); // comment 1
#endif
#ifdef FEATURE_TWO
        System.println("Feature two is kinda cool, I guess."); // comment 2
#endif
#ifdef FEATURE_THREE
        System.println("A third feature is nice to have."); // comment 3
#endif
    }
}

#endif // ifndef GLOBAL_SYMBOLS


Generated output:

myClass.global.mc
(:ciq2) const isCiq3 = false;
(:ciq3) const isCiq3 = true;


myClass.targetA.mc
(:round :ciq3) class myClass
{
    function initialize()
    {
        System.println("I support feature one."); // comment 1
        System.println("Feature two is kinda cool, I guess."); // comment 2
    }
}


myClass.targetB.mc
(:semiround :ciq2) class myClass
{
    function initialize()
    {
        System.println("Feature two is kinda cool, I guess."); // comment 2
        System.println("A third feature is nice to have."); // comment 3
    }
}
  • Do you have a sample of this in use somewhere? Looks like you're using the GCC preprocessor to handle the inclusion/exclusion of sections but I'm interested in how you've integrated this into the build/debug pipeline for the SDK (if you have).

  • Yeah, I use an external tool builder rule in Eclipse. I enable auto-build, and whenever I modify a source template in my project, the corresponding output files for all targets are generated.

    Here's a couple of snippets.

    .project:

    <?xml version="1.0" encoding="UTF-8"?>
    <projectDescription>
    	<name>MyProject</name>
    	<comment></comment>
    	<projects>
    	</projects>
    	<buildSpec>
    		<buildCommand>
    			<name>org.eclipse.ui.externaltools.ExternalToolBuilder</name>
    			<triggers>auto,full,incremental,</triggers>
    			<arguments>
    				<dictionary>
    					<key>LaunchConfigHandle</key>
    					<value><project>/.externalToolBuilders/Generate source.launch</value>
    				</dictionary>
    			</arguments>
    		</buildCommand>
    		<buildCommand>
    			<name>connectiq.builder</name>
    			<arguments>
    			</arguments>
    		</buildCommand>
    	</buildSpec>
    	<natures>
    		<nature>connectiq.projectNature</nature>
    	</natures>
    	<linkedResources>
        ...
    	</linkedResources>
    </projectDescription>
    

    ".externalToolBuilders/Generate Source.launch"

    (This is specific to both Windows and the internal folder structure of my project)

    (The files named ".mc-src" are the conditionally compiled ones)

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <launchConfiguration type="org.eclipse.ui.externaltools.ProgramBuilderLaunchConfigurationType">
    <stringAttribute key="org.eclipse.debug.core.ATTR_REFRESH_SCOPE" value="${working_set:<?xml version="1.0" encoding="UTF-8"?>
    <resources>
    <item path="/MyProject/core/source" type="2"/>
    </resources>}"/>
    <booleanAttribute key="org.eclipse.debug.ui.ATTR_LAUNCH_IN_BACKGROUND" value="false"/>
    <stringAttribute key="org.eclipse.ui.externaltools.ATTR_BUILD_SCOPE" value="${working_set:<?xml version="1.0" encoding="UTF-8"?>
    <resources>
    <item path="/MyProject/core/source/File1.mc.mc-src" type="1"/>
    <item path="/MyProject/core/source/File2.mc.mc-src" type="1"/>
    <item path="/MyProject/core/source/File3.mc.mc-src" type="1"/>
    <item path="/MyProject/core/source/File4.mc" type="1"/>
    <item path="/MyProject/core/source/File5.mc" type="1"/>
    <item path="/MyProject/core/source/File6.mc.mc-src" type="1"/>
    <item path="/MyProject/core/source/generate.sh" type="1"/>
    <item path="/MyProject/core/source/generate.targets" type="1"/>
    </resources>}"/>
    <stringAttribute key="org.eclipse.ui.externaltools.ATTR_LOCATION" value="c:\windows\system32\wsl.exe"/>
    <stringAttribute key="org.eclipse.ui.externaltools.ATTR_RUN_BUILD_KINDS" value="full,incremental,auto,"/>
    <stringAttribute key="org.eclipse.ui.externaltools.ATTR_TOOL_ARGUMENTS" value="cd `wslpath2 '${workspace_loc:/MyProject/core/source/}'` && ./generate.sh"/>
    <booleanAttribute key="org.eclipse.ui.externaltools.ATTR_TRIGGERS_CONFIGURED" value="true"/>
    </launchConfiguration>

  • wslpath2 is a script used to replace wslpath, which had a bug that prevented it from working on removable drives. It may not be necessary anymore, and if you develop off your internal HDD/SSD, it can be replaced with wslpath (see the one-liner after the link below).

    Save the following paste as wslpath2, place it in your PATH, and install PHP.

    https://pastebin.com/707TsE09
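
    If plain wslpath works for your setup, the external tool arguments above reduce to something like this (same workspace path as in the launch config; adjust for your project):

    cd `wslpath '${workspace_loc:/MyProject/core/source/}'` && ./generate.sh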

  • Since the forum won't let me edit my original post, I'll add the motivation here: it's to save memory, especially for data fields, and especially for CIQ1 devices. Even many modern devices only allow 32 KB for data fields.

    In contrast, using annotations, you may have to use the following patterns which waste a bit of memory:

    - Empty functions

    - if...else statements with "has"

    - unnecessary use of classes/inheritance

    EDIT: Sorry to beat a dead horse, but I really wish Garmin had chosen NodeBB or even Discourse for the "new" forum platform.

    EDIT: I still make heavy use of annotations (to avoid having completely unmaintainable code), but I combine them with conditional compilation to squeeze as much memory as possible out of certain devices, for certain apps.

  • This is all very helpful. I don't typically use Eclipse so I'm going to have to spend a little time working out the build steps but this will be useful. Thanks!

  • Interesting solution to the problem!

    I had to solve a similar problem back in 2015/2016 for Note2Watch, with the main challenge being how to build a widget and a watch-app from the same code base. Lots of little things had to change all over the place, and I'm not sure annotations existed at the time; otherwise I think I might have gone with something more like your approach.

    Instead I went with the most brain-dead, cave-man approach I could think of: programmatically commenting out the code in-place that I wasn't using.

    The way this worked is that you wrapped certain lines in C-style #ifdef / #endif lines (with slightly different syntax), and then a Python script would parse the source files and perform the necessary comment/uncomment fix-ups. Then all I had to do was use Makefile targets to execute this fix-up script, so I could write something like:

    make widget

    make watch-app


    Eventually I even added different development modes I would swap in, so that added another degree of freedom, e.g.:


    make staging-widget

    make release-watch-app


    Luckily for Runcasts, I don't need nearly this level of flexibility since I'm only targeting an audio app. To implement different release environments like testing, staging, and production, I programmatically generate a strings XML resource and then reference that in the code using Ui.loadResource(). Then I use a JSON file as the source of truth, an XML template to inject the values from JSON, a bash script to perform the interpolation using sed, and Makefile targets to hide the (slight bit of) messiness. So far that pattern has worked really well, and it's much, much less code.
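
    As a rough sketch of that interpolation step (the file names, paths, and flat-JSON assumption are mine, not the actual Runcasts build):

    #!/bin/bash
    # Hypothetical: inject values from a flat JSON file into an XML strings template.
    # Assumes staging.json looks like {"apiUrl": "https://staging.example.com", "envName": "staging"}
    # and strings.xml.tmpl contains placeholders such as @apiUrl@ and @envName@.
    ENV_JSON="$1"                                  # e.g. config/staging.json
    TEMPLATE="resources/strings/strings.xml.tmpl"
    OUTPUT="resources/strings/strings.xml"

    cp "$TEMPLATE" "$OUTPUT"

    # Pull each "key": "value" pair out of the flat JSON and substitute @key@ in the output.
    grep -o '"[^"]*"[[:space:]]*:[[:space:]]*"[^"]*"' "$ENV_JSON" | while IFS= read -r pair; do
        key=$(echo "$pair" | sed 's/^"\([^"]*\)".*/\1/')
        value=$(echo "$pair" | sed 's/.*:[[:space:]]*"\([^"]*\)"$/\1/')
        sed -i "s|@${key}@|${value}|g" "$OUTPUT"   # GNU sed; on macOS use sed -i ''
    done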


    If anyone's curious what the Monkey C metalanguage parser looked like, here's the code:

    #!/usr/bin/env python
    """
    Perform preprocessing on MonkeyC files.
    
    For example allows you to write code like:
    
        //#if DRY_RUN
        //%    Logging.log("Would have done foo");
        //#else
            doFoo();
        //#endif
    
    If -t DRY_RUN is passed to the preprocessor, then the `Logging.log` line will
    be uncommented and the `doFoo` line will be commented out.
    
    Usage:
        preprocessor -t <tag1> -t <tag2>
    
    
    NOTE:
        This is experimental, it could absolutely destroy all of your work!
        Make sure you have your code checked-in/backed-up before running this on
        it!
    """
    import argparse
    import collections
    import fnmatch
    import os
    import subprocess
    import sys
    
    COMMENT_MARKER = '//%'
    COMMAND_MARKER = '//#'
    
    VERBOSE = False
    STATS = collections.Counter()
    
    
    def log(msg):
        if VERBOSE:
            print >> sys.stderr, msg
    
    
    
    class TagException(Exception):
        def __init__(self, tag, line_number):
            self.tag = tag
            self.line_number = line_number
    
    
    class IfWithoutEnd(TagException):
        pass
    
    
    class ElseWithoutIf(TagException):
        pass
    
    
    class ElifWithoutIf(TagException):
        pass
    
    
    class EndIfWithoutIf(TagException):
        pass
    
    
    class NestedIfsNotAllowed(TagException):
        pass
    
    
    class NestedElsesNotAllowed(TagException):
        pass
    
    
    class PathNotFound(Exception):
        def __init__(self, path):
            self.path = path
    
    
    class UnknownTransform(TagException):
        def __init__(self, transform, line_number):
            super(UnknownTransform, self).__init__(None, line_number)
            self.transform = transform
    
    
    class UnknownCommand(TagException):
        def __init__(self, command, line_number):
            super(UnknownCommand, self).__init__(None, line_number)
            self.command = command
    
    
    def diff(file1, file2):
        try:
            subprocess.check_output(['which', 'colordiff'])
        except subprocess.CalledProcessError:
            # No colordiff available
            subprocess.call(['diff', '-u', file1, file2])
        else:
            proc = subprocess.Popen(['diff', '-u', file1, file2],
                                    stdout=subprocess.PIPE)
            subprocess.call(['colordiff'], stdin=proc.stdout)
            proc.wait()
    
    
    def comment_line(tag, line_number, line):
        touched = False
        if not line.startswith(COMMENT_MARKER):
            line = COMMENT_MARKER + line
            touched = True
        return line, touched
    
    
    def uncomment_line(tag, line_number, line):
        touched = False
        if line.startswith(COMMENT_MARKER):
            line = line[len(COMMENT_MARKER):]
            touched = True
        return line, touched
    
    
    def transform_file(infile_path, dry_run=False, tags=None):
        tags = set(tags or [])
        cur_cmd = None
        cur_tag = None
        aux_tags = set()
        matches = set()
        begin_line_number = None
    
        if not os.path.exists(infile_path):
            raise PathNotFound(infile_path)
    
        outfile_path = infile_path + '.tmp'
        outfile = open(outfile_path, 'w')
        try:
            with open(infile_path) as infile:
                for line_number, line in enumerate(infile, start=1):
                    if line.startswith(COMMAND_MARKER):
                        parts = line.replace(COMMAND_MARKER, '').strip().split()
                        cmd = parts[0]
                        if cmd == 'if':
                            if cur_cmd == 'if':
                                raise NestedIfsNotAllowed(cur_tag, line_number)
                            tag = parts[1]
                            log( "{}:{}: if '{}' found".format(
                                    infile_path, line_number, tag))
                            cur_cmd = cmd
                            cur_tag = tag
                            begin_line_number = line_number
                        elif cmd == 'else':
                            if cur_cmd == 'else':
                                raise NestedElsesNotAllowed(cur_tag, line_number)
                            elif cur_cmd not in ('if', 'elif'):
                                raise ElseWithoutIf(cur_tag, line_number)
                            log( "{}:{}: else '{}' found".format(
                                    infile_path, line_number, cur_tag))
                            cur_cmd = cmd
                            begin_line_number = line_number
                        elif cmd == 'elif':
                            if cur_cmd not in ('if', 'elif'):
                                raise ElifWithoutIf(cur_tag, line_number)
                            tag = parts[1]
                            log( "{}:{}: elif '{}' found".format(
                                    infile_path, line_number, tag))
                            aux_tags.add(cur_tag)
                            cur_cmd = cmd
                            cur_tag = tag
                            begin_line_number = line_number
                        elif cmd == 'endif':
                            log( "{}:{}: endif '{}' found".format(
                                    infile_path, line_number, cur_tag))
                            # endif is valid after if, elif, or else
                            if cur_cmd not in ('if', 'elif', 'else'):
                                raise EndIfWithoutIf(cur_tag, line_number)
                            cur_cmd = None
                            cur_tag = None
                            aux_tags = set()
                            matches = set()
                            begin_line_number = None
                        else:
                            raise UnknownCommand(cmd, line_number)
                    elif cur_cmd == 'if':
                        if cur_tag in tags:
                            transform_func = uncomment_line
                            matches.add(cur_tag)
                        else:
                            transform_func = comment_line
                        line, touched = transform_func(tag, line_number, line)
                        if touched:
                            STATS[infile_path] += 1
                    elif cur_cmd == 'elif':
                        if matches and cur_tag not in matches:
                            transform_func = comment_line
                        elif cur_tag in tags:
                            transform_func = uncomment_line
                            matches.add(cur_tag)
                        else:
                            transform_func = comment_line
                        line, touched = transform_func(tag, line_number, line)
                        if touched:
                            STATS[infile_path] += 1
                    elif cur_cmd == 'else':
                        if (aux_tags | {cur_tag}) & tags:
                            transform_func = comment_line
                        else:
                            transform_func = uncomment_line
                        line, touched = transform_func(tag, line_number, line)
                        if touched:
                            STATS[infile_path] += 1
    
                    # No matter what, write the output line
                    outfile.write(line)
        except:
            # Don't leave *.tmp files laying around
            outfile.close()
            os.unlink(outfile_path)
            raise
        else:
            outfile.close()
    
        if cur_cmd:
            os.unlink(outfile_path)
            # Any block still open at EOF (if/elif/else) is missing its endif
            raise IfWithoutEnd(cur_tag, begin_line_number)
    
        if VERBOSE:
            diff(infile_path, outfile_path)
        if dry_run:
            os.unlink(outfile_path)
        else:
            os.rename(outfile_path, infile_path)
    
    
    def do_transform_file(args, infile_path):
        try:
            transform_file(infile_path, dry_run=args.dry_run, tags=args.tags)
        except ElifWithoutIf as e:
            sys.exit("error: {}:{}: elif '{}' without beginning if".format(
                infile_path, e.line_number, e.tag))
        except ElseWithoutIf as e:
            sys.exit("error: {}:{}: else '{}' without beginning if".format(
                infile_path, e.line_number, e.tag))
        except EndIfWithoutIf as e:
            sys.exit("error: {}:{}: endif '{}' without beginning if".format(
                infile_path, e.line_number, e.tag))
        except IfWithoutEnd as e:
            sys.exit("error: {}:{}: if '{}' without accompanying endif".format(
                infile_path, e.line_number, e.tag))
        except NestedIfsNotAllowed as e:
            sys.exit("error: {}:{}: tag '{}' already active, nested-ifs not"
                     " allowed".format(infile_path, e.line_number, e.tag))
        except NestedElsesNotAllowed as e:
            sys.exit("error: {}:{}: tag '{}' already active, nested-elses not"
                     " allowed".format(infile_path, e.line_number, e.tag))
        except UnknownTransform as e:
            sys.exit("error: {}:{}: unknown transform '{}'".format(
                infile_path, e.line_number, e.transform))
        except UnknownCommand as e:
            sys.exit("error: {}:{}: unknown command '{}'".format(
                infile_path, e.line_number, e.command))
        except PathNotFound as e:
            sys.exit("error: '{}' not found".format(e.path))
    
    
    def do_collect_tag_from_file(args, infile_path, collected_tags):
        with open(infile_path) as infile:
            for line in infile:
                if line.startswith(COMMAND_MARKER):
                    parts = line.replace(COMMAND_MARKER, '').strip().split()
                    cmd = parts[0]
                    if cmd == 'if':
                        tag = parts[1]
                        collected_tags.add(tag)
    
    
    def do_yield_files(args):
        for path in args.paths:
            if not os.path.exists(path):
                sys.exit("error: '{}' not found".format(path))
            if os.path.isdir(path):
                for root, dirs, files in os.walk(path):
                    for filename in files:
                        filepath = os.path.join(root, filename)
                        if args.globs:
                            for glob in args.globs:
                                if fnmatch.fnmatch(filepath, glob):
                                    break
                            else:
                                continue
                        yield filepath
            elif os.path.isfile(path):
                yield path
            else:
                sys.exit("error: what kind of file is '{}'?".format(path))
    
    
    def main():
        global VERBOSE
        parser = argparse.ArgumentParser(
                description="Perform preprocessing on source files")
        parser.add_argument('paths', nargs='+',
                            help='Files or directories to operate on')
        parser.add_argument('--dry-run', action='store_true',
                            help="Don't write results")
        parser.add_argument('-g', '--glob',
                            dest='globs',
                            default=['*.mc'],
                            action='append',
                            help="Filename filter (default: *.mc)")
        parser.add_argument('--show-tags', action='store_true',
                            help='Just show available tags')
        parser.add_argument('-t', '--tag', dest='tags', action='append',
                            default=[],
                            help='Tags to operate on (default: all tags)')
        parser.add_argument('--verbose', action='store_true', help='Verbose mode')
    
        args = parser.parse_args()
    
        VERBOSE = args.verbose
    
        if args.show_tags:
            # Show tags
            collected_tags = set()
            for path in do_yield_files(args):
                do_collect_tag_from_file(args, path, collected_tags)
            for tag in sorted(collected_tags):
                print tag
        else:
            # Transform
            for path in do_yield_files(args):
                do_transform_file(args, path)
    
            if args.verbose:
                print
    
            print "Filename".ljust(78 - 15 - 1), 'Lines Touched'.rjust(15)
            print "=" * 78
            total = 0
            for filename, strip_count in STATS.iteritems():
                total += strip_count
                print filename.ljust(78 - 15 - 1), str(strip_count).rjust(15)
            print "-" * 78
            print 'Total'.ljust(78 - 15 - 1), str(total).rjust(15)
    
            if args.dry_run:
                print "Dry-run complete"
    
    
    
    if __name__ == '__main__':
        main()
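
    Typical invocations, going by the docstring and argparse setup above (the script name, tag, and directory are placeholders):

    # preview what would be commented/uncommented for the DRY_RUN tag
    ./preprocessor --dry-run --verbose -t DRY_RUN source/
    # list every tag used under source/
    ./preprocessor --show-tags source/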
    
  • Thanks for the input. I am probably going to put something together that lies in between what you two are doing.

  • I know that this is an old thread, but I want to share my solution to this problem. I searched for a solution that would be easy to set up, would support automatic preprocessing when files are changed, and would work with any editor (Eclipse, VS Code) on any OS. None of the solutions that I found met my requirements, so I made an open source tool called directive-preprocessor, which is written in Node.js and used in one of my data fields. These are the steps that I took for the data field application:
    - Applied the directives to the files
    - Moved the files that needed preprocessing into the "source-preprocess" folder
    - Created a "source-generated" folder where all preprocessed files would be placed
    - In the jungle file, set "base.sourcePath" explicitly so that it omits the "source-preprocess" folder (see the sketch after this list)
    - Installed the "directive-preprocessor" tool and created the configuration file (preprocess.config.json)
    - Created a "package.json" file, so that I can just write "npm start" in the terminal to start the tool
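
    The jungle change from that list might look something like this (the plain "source" folder name is just an assumption; use whatever folder holds the files that don't need preprocessing):

    # monkey.jungle (illustrative)
    base.sourcePath = source;source-generated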

    You can check the result of the above steps here: https://github.com/maca88/SmartBikeLights/tree/master/Source/SmartBikeLights

    At the end I was able to gain around 700 bytes of memory by removing "has" usages, inlining functions, removing shared classes and also able to reduce the code by 150 lines as I used separate functions, one for high memory and one for low memory. I hope that someday Garmin will add directives support, as I think they are a key component when trying to squeeze as much features as possible for low memory devices.