Category Archives: Development

Apex Validations for Tabular Reports (4.1 or any version)

There is a lot of misdirection out there in Googleland about Apex validations and how they are much nicer since version 4.1 yadda yadda.  For Tabular Reports, ignore this advice.  After half a day of searching, my conclusion is that when dealing with validations for Tabular Reports you should stick with the old way of iterating through all rows in your report.

After being redirected a couple of times to the examples I finally looked them up.

So to skip to the meat of the advice – for tabular reports you still need to identify your report column and validate the whole report the old-fashioned way.  So step by step this means creating a validation:

1. Page Processing -> Validations -> Create

and then:

2. Page Level Validation -> PL/SQL -> Function Returning Error Text (the body below returns an error string rather than a boolean expression)

and then create a body similar to this:

DECLARE
   l_error   VARCHAR2 (4000);
BEGIN
   -- g_f03 is the array Apex builds for the tabular form column named f03;
   -- change the index to match the column you want to validate.
   FOR i IN 1 .. apex_application.g_f03.COUNT
   LOOP
      IF LENGTH (NVL (apex_application.g_f03 (i), '')) > 4
      THEN
         l_error :=
               l_error
            || '<br/>'
            || 'Row '
            || i
            || ': This value '
            || apex_application.g_f03 (i)
            || ' can only be of length four';
      END IF;
   END LOOP;

   -- Returns NULL (no error) when every row passes.
   RETURN LTRIM (l_error, '<br/>');
END;

Ensure that the column array referenced above (g_f03 here) maps to the report column you want to validate – the number matches the name="fNN" attribute on that column’s input fields in the page source – and obviously modify your logic accordingly.  This solution plays nicely with MRU in Apex 4.1 and, as I say, after a few wasted hours I’ll be using this page again in the near future I think!

Continuous Delivery: Integrating TFS on Linux

We’ve got a thing going on at work about providing Continuous Delivery and in that vein we’re integrating our TFS implementation with a backend nightly build driven through our Linux infrastructure.  Unlike a lot of software projects, we don’t actually produce much software – we’re more about data warehouses, ETL and business rules than we are Java/C++ builds – so it’s mainly Oracle, Informatica, Aptitude and scripts that are going to be running through our build and test suite.

The first step in this journey, however, is to integrate nightly builds of our latest TFS snapshots.  To do this we aim to pull down the code and package it automatically on Linux – this being our target rollout platform.  Now Microsoft do indeed provide a tool for this – the Team Explorer Everywhere pack – and it’s a relatively simple matter to get this installed and configured.  Annoyingly though, the command-line ‘tf’ tool doesn’t appear to support Linux in a very friendly way.  There appears to be a lack of environment variable support, most of the examples are Windows-oriented and the documentation is a little sparse.  This post is an effort to document the salient points.

I have done the following to get the TEE tf client working on RHEL 5.8:

1. Install the TEE client under my own local home directory

2. Configure Java for the command line (re-using the one bundled with Oracle 11.2 seems fine), with JAVA_HOME and PATH set.

3. Test connectivity to TFS with the following command:

tf dir $/toplevel -server:http://tfs.yourcompany.com:8080/tfs/yourdept -login:WINDOMAIN\WINUSER,WINPASSWORD

4. Once you have the above working you can create your workspace according to the details in this post.  Essentially this means making a workspace thus:

tf workspace -new -collection:http://tfs.yourcompany.com:8080/tfs/yourdept -permission:Private

5. And then linking that workspace to a local folder on Linux (it must exist already, and I would advise linking to the top level in the first instance):

tf workfold -map "$/" "/home/bownr/tfs_working_folder" -workspace:workspacename

6. You can then link further subdirectories as you need and perform a get as required in them (using -force if you want to force an overwrite in your local copy):

$ tf get . -force

Once you get to this point you may think you’re done.  I did.  However there might be some next steps for those sites with any kind of normal security policy.  TEE seems to want to use plain-text authentication for passwords (and also liberally sprinkles the passwords and auth details in log files).  So you may want to read this thread to get some ideas on Kerberos authentication or other ways to fly.
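To tie the steps together, the nightly pull itself can then be a small wrapper driven from cron.  This is only a rough sketch: the log path is hypothetical, and credential handling is deliberately left out until you’ve settled on an authentication approach your security policy allows.

#!/bin/sh
# Nightly TFS pull (sketch) - assumes the workspace mapping created above
# and that the tf client and JAVA_HOME are already set up for this account.
cd /home/bownr/tfs_working_folder || exit 1
# Force an overwrite of the local copy with the latest snapshot
tf get . -force > /home/bownr/logs/tfs_get_$(date +%Y%m%d).log 2>&1
# ...then kick off the Oracle/Informatica packaging steps from here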

Making Progress

It’s hard to believe that it’s almost two months since I started my toilet project.  Yes, I know.  However things are still moving in the right direction, albeit a little slowly.  Most of the delay has come not from a lack of time to do things but from a lack of decisions on some of the essentials – and of course things take a lot longer than you imagine they will.  Flooring and tiling have to be chosen and ordered, and it’s best to install both of these before hanging the toilet itself.  One can tile around the toilet afterwards but the effect won’t be as clean, and I’ve no intention of cutting corners on this job.

At least now the old toilet is out and the new frame is in.  Here is a work in progress from a few weeks ago.

Toilet frame

The Geberit corner frame is really simple to fit – four bolts hold it to the walls and floor.  The waste pipe is easily cut and snapped into place, but one thing I overlooked was how the plasterboard would then attach to the frames.  Probably I was missing the Geberit rail system for this; however, as things turned out, I had enough purchase with the existing insulation joists to make a solid enough fit for these thick panels.  As you can see, though, they are now on, the sink is in place, and the walls are insulated and plasterboarded completely.


Underfloor heating has been considered, discarded and considered again but it seems overkill so we’re erring on the side of an electric radiator for the room.  Once the floor is insulated (with the plywood subfloor you can see below) and the vinyl tiles we’ve selected glued down it should certainly be warm enough underfoot.

The next step will be to get some professionals in to plaster and tile the room.  The finish is important and I don’t want to mess up the finishing touches.  I have tiled before, and I am tempted to do it (and still may yet be) but like I say, the finish is important.

Then once we have a radiator I also have to fix up or replace the window in the room and also replace the glass above and in the door, then we can paint and we’re done.  Hopefully by the spring time..

I promised something to follow up on my post about one-man projects, and I have a lot of thoughts on this stemming from both software and toilets; however, that post is going to have to wait a little longer.

Happy New Year to all those handy, software people out there.

Golden Gameplay

I love this thread on Slashdot which pretty much sums up my thoughts on games and gaming in general.  The thread is inspired by this article which takes a view on the current state of the nation in gaming.  We all know what a huge business gaming has become in all aspects: in browser, on mobile, on console, on computer, on handheld.  The argument has always been that the more money comes into the business the more that gameplay and game longevity suffer.  This argument has been around as long as video games have.

My take is that big games companies are lazy, risk averse and profit-driven and that independents are so confused by the panoply of platforms they can target that they become paralysed by anxiety.  Additionally the third-party tools that aim to ease the confusion are the only true winners at the moment.  Either you need to be a focussed independent or a major label looking to try something new.

Now, this doesn’t always work.  Remember Mirror’s Edge?  And check out The Unfinished Swan.  Great-looking games with great feel don’t always succeed at the box office.  Innovation hardly ever assures success; gameplay is golden, and gold as we know is impossible to synthesise.  To make a great game today you must combine graphics, environment and interaction – find a balance – sprinkle on that fairy dust.

Delivering the One-Man Project

In a recent email conversation I touched on the ability, or more often inability, of a one-man software project to deliver regular controlled packets of code.  Or to put it another way, the ability of many one-man software projects to deliver very little or nothing at all despite setting out with the best intentions and a following wind.  At the heart of the problem lies the following rather messy aphorism:

You don’t get consensus in a one-person software project.

Consensus by definition is a group activity.  A one-person project or software product can have consensus, but this is only arrived at by interaction with others, usually your users, and if you don’t have a delivered product or a project then you don’t have users.  Consensus is vital to give you the confidence to deliver what you’ve been developing.

So the question becomes: how can we form a consensus on project decisions when there is no code to attract users?  What we’re seeing increasingly is the use of crowd funding to drive software development.  Crowd funding is proving to be very useful for software developers, and not just from a financial point of view.  More importantly, crowd funding can help drive direction, delivery and scope and also help test the choppy waters of marketing by trying out ideas on an interested and motivated audience.  This works better for established companies or old-timers than it does for unproven individuals with new ideas, simply because old-timers already have a track record and an audience.  So let’s back up a bit further.

A Kickstarter project commits you to a direction – a product usually – and an intention.  Your scope is inherently limited by the promises you make, the videos you release, the decisions you’ve already made when you’re coming up with your Kickstarter concept.  It’s all about selling a promise.

A one-person (software) project is usually driven to scratch a particular itch, and its scope may not necessarily include getting to a point where you can sell something.  You might want to make an open-source framework, you might want to come up with a vaccine, who knows?  What you do know is that you as a one-person team have limited time to accomplish this and you want to make incremental progress.

How can you deliver progress?  Well, I’m going to tell you what I do.  I do it by giving myself more than one thing to do in my spare time.  Instead of deciding to deliver one project, I will deliver three or four or five projects simultaneously.  Each will have its own technology associated with it that is slightly different, each will offer something slightly different that allows me to enjoy working on it for a different reason.  It’s me after all that I’m pleasing here.  I’m also out to help others and provide something they might like and find useful but I’m doing this, for me.

The premise may sound slightly insane – why should you make your life more complicated when you want to focus all your spare time on doing one project really well?  In the next few weeks I’ll take you through how I do this and what techniques I use to keep myself interested and focussed.  Above all I want to show you that you don’t need to despair when you look at the calendar and realise that the code is still not out.

Visual Studio 2012 and WOWZAPP

I saw the Microsoft WOWZAPP 2012 hackathon event as a real opportunity for Microsoft to engage with developers at grass roots level and convince them that they should spend their time creating apps in VS2012 for the Microsoft Store.  Therefore it was with some excitement that I arrived the other day at my local event.  The coffee and cookies were flowing and the day had a quiet if measured start.  I had been told to come along with my laptop loaded with Windows 8, VS2012 and the examples SDK.  I did this and made sure I didn’t peek at any of the developer resources ahead of the day in order to give myself a one-shot completely immersive experience.  This is it Microsoft – show me what you’ve got.

The WOWZAPP event has its very own W8 app and this can be downloaded and installed via Powershell as it’s not available (slightly surprisingly) through the App Store itself.  The app package then provides an icon on the desktop and some links to resources.  The choice was then up to you how you used these resources to build an app of some kind.  I wanted to write a little HTML5 game so I quickly found the relevant resource and started building.  The resource was fine, if hastily assembled, and provided good detail and a nice walkthrough of the features of VS2012.  So far so good, and within an hour or so I had a working game I could start to play around with.  However I also wanted to see what the examples SDK could show me, so I started investigating the C# and C++ samples with a view to getting perspective on other dev techniques.

This is where the WOWZAPP experience starts to get a little thin.  I went back to the WOWZAPP app to try and find some more info and something else to look at in more detail, and quickly realised that that was pretty much it.  There is no depth here – a couple of links to marketing and HOWTO websites and you’re on your own for the rest – so I resorted to Googling around for tips and tricks, for example on how to get my XNA game built and running inside VS2012.  This is something I would normally be doing at home or in the office, but somehow I expected something more to be unveiled at the event.  I expected the scales to fall from my eyes and the path to app nirvana to be shown.  I hoped for some tutorials, some walkthroughs, even some linked branded articles from MSDN, say, that provided clear declarations of expected modes of behaviour and best practice in development for a variety of ‘target apps’.  Yes there was some information on style, but more basic information – like, for example, which VS2012 technology to pick for which app type – might also have been good.

As an experienced developer I’m confused, and from seeing the selection of apps created on (or prior to) the day I’d say that I’m not the only one.  Simple case solutions work, but in VS2012 there is a confusion of technologies crammed under one roof – C# and C++ sit uneasily side by side, and HTML5/Javascript seems a different world altogether.  I’m still not sure how Blend and the XAML world fit together with the rest.  At times I feel I’ll want to use all of these technologies, but it’s unclear how they can work together and how that choice limits my ability to deploy to a target environment.

The only way, like so many times before with so many technologies, will be just to try it all out and see what happens.  As soon as I work out the best development model for the app you want to write, I’ll be sure to share it.

Microsoft: Putting the User at the Centre

I’ve caught up with some of the Microsoft Build developer conference videos now.  Between the non-jokes about the weather in Seattle being bad but not as bad as it is back East, and giving away oodles of hardware to attendees, there were some pretty interesting things to come out of last week’s meeting.  It was of course an opportunity for Microsoft to show off Windows 8 and Windows Phone 8 along with some of the hardware that will be available shortly.  A lot has been made of the Microsoft Surface (and touch interfaces in general in Windows 8) and of course WP8, but I think we can take away something more fundamental about Microsoft from the themes running through this event.

At Build 2012, developers were told they should be excited about Windows 8 and Windows Phone 8.  Since watching the videos I’ve come away with the idea that developers had better be excited, because I believe this generation of software potentially changes everything for the user.  Indeed I’m pretty blown away by the joined-up thinking going on in Microsoft land.  What Apple have been hinting at, Microsoft have just said “Screw it, let’s do it”.  Xbox gaming integration, Xbox music (which I love), even Live Tiles are all starting to make sense.  Suddenly I feel like my desktop PC is like an Xbox or like my PS3 – it is a game console with other capabilities.  Except now the sometimes clunky way that games consoles deal with personalisation and network integration is somewhat, hopefully, relegated to the past.

In existing mobile devices we have many apps and platforms providing some form of integration with other services – some of it ok (say iOS Mail for example) and some pretty terrible (too many to mention) – but what is clear is that there has been a lack of systemic thinking by software architects on what constitutes providing a service to the user rather than as a service to another piece of software.  A lack of clear thinking on user bound services has also been compounded by a tentativeness to execute on a totally user-oriented experience.  Microsoft have shortened the pipe between data and user giving less wiggle room, less API elasticity.  This can only be a good thing.  For example I have an iPhone but I’ve not had the need to get involved with iCloud as I don’t have a Mac.  I have Windows iTunes but I still need to plug my iPhone into it to upload my existing music (if I want to avoid buying it again from the Apple store).  Microsoft through Windows 8 is trying to change the linear approach to device management.  Windows 8 wants to better integrate devices that I already own rather than necessarily forcing me to buy new hardware to do cool stuff.  In fact this is where the underlying marketing strategies between the two companies perhaps differ – Apple say you’d better buy the hardware if you want our software to continue to work, Microsoft say you can have some great software which will work on your existing hardware and you can also buy new hardware later if you find it useful.

With Windows 8 and Windows Phone 8 the ability to share my account across devices is implicit.  As Microsoft themselves say – the user is at the heart of the experience – it’s all about personalisation on all your devices, but doing this consistently.  While this sounds a lot like lip-service to what everyone else has already been doing badly, it seems like they are actually trying to do this thing properly.  The cloud integration is seamless across devices and somewhat surprising – for example I notice my file view preferences are carried across between W8 PCs without my having to specify them on each device.  Apps-wise, of course, Windows 8 and Windows Phone 8 lag far behind and much has been made of the numbers in the various app stores.  What should not be underestimated is the amount of software already there in Windows 8 doing most of the things you need.  Indeed until you link up your Facebook and your Skype accounts you don’t really understand what Live Tiles are about – but then suddenly you see that the experience is personal, and it’s personal across all devices, and I can seriously consider logging into my desktop or my laptop in the same house without having to worry about having all the documents I need to hand (if I take advantage of SkyDrive).  I imagine I’ll be thinking twice before renewing my LiveDrive account next time around.

The slow erosion of boundaries between apps and services has been going on for a while, but what Microsoft has done is say – ok, you want a properly personalised experience on every device?  You can have it.  And not just through Microsoft services – you can have everything in one place and we won’t stop you integrating, so you can bring Twitter, Facebook, LinkedIn and undoubtedly many others right onto your new-look desktop without installing anything from a third party.  You don’t need to worry about putting a pretty picture on your desktop because the pictures are everywhere.

Of course one question that pops up is how this level of integration will work when it comes to security.  The paranoia exhibited by all presenters at Build when they were saying “I’d better lock this as it’s my actual device” is probably as much a testament to how much of their lives are on that device (or at least the services to which it interfaces) as it is to corporate sensitivity.  If all of your accounts are at the mercy of your single Windows Live sign-on then you’d better be very careful with whatever devices have access to it – longer term this could have serious implications for security officers everywhere.

Many other questions still remain of course, not least over the newly released Windows Phone 8 SDK.  The requirement for 64-bit hardware for development is a shame, and there is a lack of clarity on the purpose of the bundled XNA 4.0, among other things.  What is clear however is that with Windows 8 and Windows Phone 8 Microsoft have delivered a big bundle of software to play around with and have fun.  The results from devs will come in over the next few months to years.  As an enticement there is a stack of engaging hardware to play around with this stuff on.  Putting the user at the centre of the experience is something that only developers can do – Microsoft can only go so far with their intentions – and as the phrase has it, the market ultimately decides.  However I know I can’t wait to get my hands on some proper Windows 8 and Windows Phone 8 hardware to see, and also to try and deliver, the fully integrated experience that people have been waiting for probably since the invention of the smartphone.  Maybe it won’t be this iteration but by betting this big, Microsoft are getting mighty close.

Nowadays Everyone is a Developer

I just discovered the Developer Tools in IE8. WHAT THE?! When did things get so complicated? I suppose when browsers got so capable – which goes some way to explaining why people treat browsers like systems in their own right now. And IE8, and I should fancy also Firefox and everything else, has a console I can try stuff out on. How weird – I mean for someone who has not had call to write a website by hand in a long time. No wonder web “developers” can exist purely in web land and still make a lot of noise – web land seems to have all the resources to keep people very busy and make something like a 20k piece of wondercode seem cool enough to make people’s lives a lot easier. Still, all that piece of code appears to be is essentially a vimrc – a rewrite function that subverts the way the engine works. So what is the browser now – a kernel, an engine, a database? Are developers just DBAs, tweaking and poking the engine to do what they want it to? And does anybody care about all this cleverness if they’re not actually another web developer? I mean we hear about jQuery and HTML5 but the world is still turning the same way and web pages are as shonky and slow and unreliable as they always were. So what gives with me and what gives with the world of the web?

But I can start to feel a fundamental change in my assumptions. Rather than the browser being the add on – the browser is at the centre and everything else floats around that. But Google tried that and it plainly didn’t work for some very good reasons that Berners-Lee said on a mailing list not so long ago which I would only misquote if I could find it. And so I digress into a Friday reverie of simpler times when you could understand what was going on and not everyone thought they were a developer.

Except of course back then everyone told you to read Zen and the Art of the Internet and The Internet Worm and consider those as even more simple times and looked down their noses at poncy C++ developers with their modern ideas.

Another Rabbit Hole

The last few days have involved all sorts of user interface ridiculousness.  WPF, .NET, Java, Swing and Eclipse and then back to .NET and then on to C++ and then back to .NET again.  It’s nice getting on top, underneath, beyond, behind and inside of things sometimes and then it’s also nice to hear something like this announcement and just think – Ok perhaps this is where I should be going now.  Stop running around like a headless Christmas Chicken.

MonoGame is doing what Unity and GameMaker Studio and Xamarin and all those other platforms are doing – they’re allowing you to target multiple systems (iOS, Android, WP7, Xbox etc) with one codebase utilising a clever framework.  What MonoGame does differently though is that it builds on something that is already there and established (i.e. XNA), it targets all those platforms from pretty much the same codebase (they say), and it’s open source and intends to stay that way.  The others are going to charge you money to do this – a lot of money.  The paid-for ones have fancier tools but they all essentially do the same thing.

So I now have something new to look at.  After a year of XNA fiddling with Friendlier and having made progress with Android and OpenGL development I feel I can have an objective look at something like MonoGame without feeling that I’m taking the easy way out.  I’m already writing a framework, I should understand their framework.  And this might be a good opportunity for a little decluttering of the projects that have built up over the last year…

Platform Transform

I’ve been burying my head in Brazil – the framework – over the last few weeks.

The first step was taking Friendlier and making it work as an API: cutting vast blocks of functionality (which, for shame, resided mainly in one or two huge files) first into addressable chunks, and then plumping out this component-based API with blocks that would give finer-grained control to subsequent apps.

To wit – I parcelled up the old stuff and kept it working while creating new things that had a cleaner and therefore more reusable interface. The output of this phase of work was a half-completed game called Paulo. This is what Christmas Chickens is turning into: a 3D XNA game written using Brazil.

So once I had satisfied myself that the framework was usable I set about moving it immediately on to another platform. Why not finish the game? Well my goals are bigger than ‘just’ writing games and also not limited to one platform. I could just write a game in XNA if I wanted to target that platform. I wanted to see if this framework would translate to another platform – and I didn’t want to wait any longer to test out the theory.

Next stop – Eclipse, Java, Android OpenGL ES. A big change from the (yes) cosseted world of Visual Studio and XNA 4.0.

Select a Java version, install the Java SDK, get Eclipse (or another IDE), download the Android SDK and virtual device (AVD) manager, then select and download the target Android SDKs, integrate the Android and Mercurial plugins into my IDE and fire up a new Android project.

Then on to the OpenGL integration. Subclass the Renderer object and port my existing C# framework into Java. This bit is relatively straightforward as C# and Java share a great deal of syntactic similarity. The larger challenge is working out how much abstraction to keep in Brazil for Java: OpenGL is a lower-level API than XNA, so the choice is either to bind directly with it or to introduce some XNA-like helper components such as Vector3s and BoundingBoxes. However, so far so good. I have a working Java framework and a working application and I’m able to fire up a virtual Android device and see the app working.
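To give a flavour of what that Renderer work looks like, here is a minimal sketch. The BrazilVector3 and BrazilRenderer names are hypothetical stand-ins rather than the real Brazil API, and it assumes the stock GLES 1.x GLSurfaceView.Renderer interface:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

// An XNA-style helper so ported C# code can keep thinking in Vector3 terms.
class BrazilVector3 {
    public float x, y, z;
    public BrazilVector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

// The Renderer implementation that stands in for XNA's Update/Draw loop.
class BrazilRenderer implements GLSurfaceView.Renderer {
    private final BrazilVector3 eye = new BrazilVector3(0f, 0f, -5f);

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glClearColor(0f, 0f, 0f, 1f);    // one-off GL state setup
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height); // handle rotation/resize
    }

    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        // update and draw the ported Brazil components here, using 'eye' etc.
    }
}

Wiring this up is then just a matter of creating a GLSurfaceView in the Activity and calling setRenderer(new BrazilRenderer()) on it.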

Next steps will be Paulo for Android and then I tackle the next platform. I’m documenting this transformation here and also over on the xyglo site where you can see code examples.