Tuesday, September 20, 2016

Tasks that are keeping me busy as a professor this semester

Here are the tasks that are keeping me busy this semester (Sept-Dec 2016).

I am writing this so that when people ask whether I would mind helping them out, I have somewhere to point them when I say either 'sorry, no' or 'I will try to fit it in, but please don't expect a fast response'. Members of the public often don't understand the heavy workload of a professor. In the following, I am sure I have forgotten some things. I probably shouldn't have 'wasted' time making this list, but I have done it to reduce my stress/guilt levels when I have to say 'no' or 'slow'!

Teaching and research: (It is not always easy to separate these areas, since graduate teaching and undergraduate project supervision blend into research)

  • Supervising ten 4th-year University of Ottawa capstone software engineering projects (each team has 2-5 students). Meeting most weeks with cohorts (sets of groups) to discuss progress, meeting with individual groups as needed, and meeting with individual students when there are issues. Liaising with the students' 'customers'. Constantly monitoring GitHub pages to ensure there is progress.
  • Preparing for the next cohort of capstone projects by helping find projects.
  • Co-mentoring, along with one of my PhD students, 4 undergraduates from other universities working on Umple as part of their 4th-year capstone project through UCOSP. Meetings every week, along with time spent finding issues for them to work on, discussing design options, reviewing design and code, and giving/reviewing formal feedback to them.
  • Supervising a student in a directed studies course that is related to my research
  • Supervising 7 PhD students, 5 of whom are in the final thesis-writing stage (topics relate to Umple, user interface evaluation using machine learning and vision, and enterprise architecture). Includes finding and liaising with committees, guiding research, discussing research and design options, editing papers and theses, and so on. I meet almost every week with each student.
  • Supervising one masters student, in the thesis-writing stage.
  • Supervising/assisting 3 postdocs/visiting researchers (on topics of reverse engineering, software engineering education and deep learning for robotics)
  • Sitting on the committees of various students supervised by other professors (includes reading theses, preparing comprehensive exams, etc.)
  • Travelling to present several papers that have been accepted at conferences (expected 2 weeks of travel this semester). This also includes attending sessions at these conferences, networking, and so on. This semester I am going to MODELS and ISoLA, and hopefully CASCON.
  • Travelling to a meeting of a research consortium I am part of.
  • Planning travel and filling out paperwork before and after travel (sometimes it seems as though doing the paperwork can take as long as the travel).
  • Working on at least 7 scientific papers at various stages of preparation for journals and conferences, related to the above. Most papers involve collaboration of multiple grad students and/or external colleagues.
  • Investigating and working on one or more grant proposals
  • Responding to almost daily requests from potential future graduate students. These days I am saying 'no' until some of my existing students graduate, to lighten my load, and until I have new sources of funding.
  • Writing letters of reference for many former undergraduate and graduate students.
  • Actually conducting some of the research! This includes doing a certain amount of active work on Umple (e.g. fixing an issue or two) in order to maintain my personal software engineering skills
  • Responding to other researchers' requests about my research. I receive inquiries for help, requests for papers, and have to deal sometimes with people who make mistakes when writing about my research ... and I need to set the record straight.
  • Organizing meetings of my research group
  • Managing research infrastructure (servers etc.)
  • Keeping track of my research finances including setting up contracts for those graduate students that I pay. The finance system is quite hard to work with; I have to manage my own spreadsheets so I can be 'forward-looking' and reconcile these with the 'backward looking' university accounting system.
  • Filling out paperwork required by granting agencies regarding the progress of each research project.
  • Keeping up to date by reading literature, researching the latest software engineering techniques, etc.
  • Preparing for my graduate course in Software Usability to be taught next semester.
  • Applying for 'Ethics approval' for certain kinds of research, and reporting on ongoing projects. The forms are extremely complex, so this is an unduly time-consuming task.
  • Skimming/reading/replying to large numbers of emails relating to all of the above tasks
  • Keeping my memberships in the IEEE, ACM, CIPS, PEO, etc. up to date.
  • Writing blog posts (it seems only about once a year now). Helping to raise public awareness.


Administration (I am Vice-Dean Governance)

  • Attending meetings of Faculty Executive and Faculty Council; helping to prepare agendas, preparing minutes, running special votes, and so on.
  • Attending Senate, Senate Executive, Senate Undergraduate Council (includes reading large volumes of material in preparation for these meetings).
  • Working on negotiations with the TA/RA Union. Multiple meetings most weeks.
  • Doing whatever other research is needed for the above roles, and any tasks assigned by the Dean (I am 'excluded' from the professor union so I can help with personnel tasks).
  • Applying for an academic leave (sabbatical) next year, including writing a proposal, documenting progress, and so on. Hard deadline at the end of September. I am overdue for this. The focus will be on Software Engineering Education.
  • Consulting with professors who are seeking advice (e.g. about tenure and promotion)
  • Assisting in preparation for accreditation at UOttawa (Computer Science and Software Engineering)
  • Participating in the Software Engineering curriculum committee.
  • Sitting on various ad-hoc committees (e.g. a committee on research IT infrastructure)
  • Other minor tasks: e.g. managing citation data for the faculty, liaising with other vice-deans, making active suggestions for improvements in various areas, such as faculty management
  • Skimming/reading/replying to large numbers of emails from the Dean, Executive, and so on
  • Attending certain 'social' events where my attendance is required/desirable (e.g. celebrations of retirements, awards ceremonies, announcements, welcoming of new students and staff, representing the Dean if he is not available)
  • Attending Council of the School of Electrical Engineering and Computer Science
  • Attending convocation ceremonies
  • Attending (if I can fit it in my schedule) certain training activities.


Service (beyond my formal role as Vice-Dean Governance)

  • Peer-reviewing for many journals and conferences (several papers to review formally every month). Publons lists my recent journal reviews, but not conference reviews.
  • Serving on the editorial team of SoSym Journal (finding reviewers, helping the editor in chief make decisions)
  • Serving as CIPS visitor and team lead to the Seoul Accord (I have a report to write that is overdue)
  • Serving as evaluator for tenure and promotion, or grant proposals, for professors from other universities
  • (perhaps) attending accreditation visits at other universities
  • Skimming/reading/replying to large numbers of emails relating to all of the above tasks
  • Responding periodically to requests from journalists for my expertise
  • Periodically helping out student groups (if I have the time)

Wednesday, February 4, 2015

Oil prices will continue on their wild swings for years to come

"Oil prices drop by half in just a few months". A headline from early 2015? No, from 2008, as this article attests. Prices had been up in the $140s before that, and between 2008 and mid-2014 they went back up again well above $100.

So in fact, we are currently in the midst of a series of wild multi-year swings in the price of oil. This pattern of wild swings in the price of an important resource as it was being depleted was also experienced in the 1800s with the price of whale oil.

It's really pretty simple: High prices encourage investment in improved extraction (in the case of whales, better boats, better hunting techniques, longer voyages). After an important time lag these investments pay off in terms of increasing supplies; for a while the investment keeps pouring in because there are high prices and lots of product to sell. The resource producers do well, those dependent on the resource suffer from the high prices. 

But then a glut takes hold: too much resource. In 2008 it was exacerbated by a recession that dropped demand, but whether or not there is a recession, the hyper-investment in supply will cause a glut, even in a dwindling resource. The temporary glut causes prices to start to fall, fast. It takes time to turn off the investment, and there is still plenty of supply for a while, pushing the price way down. Some suppliers (with the lowest production costs) are not too bothered by the situation, as they know it will push other suppliers (with high production costs) out. It works: suppliers shut down and investment stalls. OPEC is right now pushing Russia and the US out.

Then supply slows just as demand is soaring due to people loving the low prices. Smash ... the price soars again. But for some time there are fewer suppliers, so the price overshoots at the top end. Even now OPEC is predicting $200 oil in the not-too-distant future. There will be a lot fewer US wells producing by then, so OPEC will make a lot more money than they would have if this crash in prices had not happened. The price swoon in fact might jeopardize the dreams of the US being independent of OPEC supplies.

The last thing businesses and the modern economy need is price swings. Business can't plan; investments are too uncertain. But laissez-faire economics will guarantee swings of this nature. That is, unless a new disruptive technology takes hold. In the 1800s, crude oil took over from whale oil, whose price eventually dropped off as it was no longer needed.

So there are two possible futures:

  1. Solar + nuclear + fusion + other technologies eventually save the day.
  2. We will be stuck with swinging oil prices. 

My guess, for the next few years, is the latter. $200 oil in late 2016 or 2017? Another crash to $50 in the early 2020s; back up to $250 oil shortly afterwards?

Investors who can play long-term markets and can wait out these swings could make fortunes. But most futures plays are only for the short to medium term.

Theoretically, governments could intervene in the market to smooth out prices. For example, taxes on gasoline and other petroleum products could be much, much higher when the crude price is low, with funds going into a trust to be saved for the next peak and to support investment in continued supply. When prices peak again, the taxes would drop, supporting a slow and steady rise in real prices paid by consumers and businesses. Unfortunately, conservative thinkers currently in power would never stomach this.

Tuesday, April 15, 2014

Should you change all your passwords due to HeartBleed? I say no!

A huge number of companies and experts are saying 'change all your passwords' or 'change ours'.

Here's my take on it. I say don't blindly change your password on all sites:

A. Change your passwords on individual sites when all four of the following are true:
  • you used password-protected areas of the site between April 1, 2014 and the date the site announced it had patched the bug (or someone logged in using your account)
AND
  • the site reports it was vulnerable or was reported by others as vulnerable. See here for the status of some sites; consider a site vulnerable if you can't find out any information about its vulnerability and are worried because it contains sensitive data.
AND
  • The site contains information that could cause harm if it was exploited, or your password is similar to a password on another site that you would care about.
AND
  • the site does not use two-factor authentication (e.g. sending you a text containing a one time special code when you log in) or similar backup security mechanisms.

B. Also, change your passwords on other sites where:
  • you use the same or similar password to those you had to change in item A (but now try to make the passwords reasonably strong and different -- see my guidelines below).
OR
  • the site stores particularly risky information and recommends a change. This would apply to banks and taxation agencies that were affected, perhaps even if you haven't logged on for a longer time. Note that most banks report they were not affected. 
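The rules above amount to a small decision procedure, which can be sketched in code. This is purely an illustration of my rules A and B; the function and argument names are my own invention, and each argument assumes you can answer the corresponding question for a given site:

```python
def should_change_rule_a(used_during_window, site_was_vulnerable,
                         holds_harmful_data, password_reused_elsewhere,
                         has_two_factor):
    """Rule A: change the password only when all four conditions hold.

    used_during_window:       you used the site between April 1, 2014 and its patch date
    site_was_vulnerable:      the site reports (or was reported) vulnerable
    holds_harmful_data:       exploitation could cause harm, OR the password
                              resembles one on a site you care about
                              (passed via password_reused_elsewhere)
    has_two_factor:           the site uses two-factor or similar backup security
    """
    return (used_during_window
            and site_was_vulnerable
            and (holds_harmful_data or password_reused_elsewhere)
            and not has_two_factor)


def should_change_rule_b(similar_to_changed_password, risky_site_recommends_change):
    """Rule B: also change on other sites meeting either follow-on condition."""
    return similar_to_changed_password or risky_site_recommends_change
```

For example, a site protected by two-factor authentication fails rule A regardless of the other answers, which is exactly the intent of the fourth condition.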

So I guess most people might end up changing 20% of their key passwords based on the above, but certainly not all of them. Why do I not say 'change all your passwords' to be safe? Because there is significant risk, and this is a classic 'let's overdo it' panic situation:

1. Some sites are just not affected. Many important sites like most banks, Apple, and Microsoft are just not vulnerable. Other sites have secondary mechanisms in place and have determined that users are safe.

2. There may be residual sites that still have the vulnerability; if you use one of these with your new password(s), then you are compromised when you weren't before.

3. The HeartBleed bug works by looking at transmitted data, or data stored near where transmitted data is stored; if you or someone you know has not logged on to a vulnerable site (and your computer has not automatically logged you on), it is highly unlikely that data containing your password was accessible to the bug.

4. The password reset process itself has risks: many people don't actually know many of their passwords, and rely on a tool to remember them, or have remained logged on essentially forever. In such cases, sites typically send a reset link; if a hacker truly wants to get you, they may have ways to intercept that link, or to generate fake links anticipating that people are in the middle of resetting their passwords. Some sites even send the original password back to you unencrypted, which is dreadful.

5. Many people now have hundreds of accounts, and several dozen they use regularly. It is essentially impossible to change all passwords and remember them all, so likely you will end up resetting passwords again in the future, or be forced to write them down or use an easily-guessed pattern. These add extra risk.

For unaffected and low-impact sites (i.e. ones not dealing in financial and personal data) the risk of an attack on you is very small. In my opinion, the risk posed (items 2, 4 and 5 above) by everyone changing their passwords, when multiplied by the low probability of compromise in most cases (items 1 and 3 above), outweighs the benefit of the blanket 'change all of them' advice.

For ongoing security with passwords, here's what to do as a consumer:
  • Use passwords that are at least 6 characters long and are not just letters or numbers; use special characters if the site allows.
  • For financial institutions, governments and other agencies processing sensitive information use completely distinct passwords from all others.
  • For other sites, make sure there are several characters of difference even if you follow a password pattern.
  • Only change your password based on my guidance at the top of this post, or if you think someone may have a way to guess your password, or a specific reason to want to hack you.
  • Never click on a link that says to change a password unless you have requested such a link in the last few minutes. In other circumstances, go to the website by typing the URL or using a bookmark you have used before.
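One way to make 'several characters of difference' concrete is edit distance: the number of single-character insertions, deletions or substitutions needed to turn one string into another. The sketch below is my own illustration, and the threshold of 3 is an arbitrary choice, not a standard:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def sufficiently_different(pw1, pw2, min_distance=3):
    """True if two passwords differ by at least min_distance edits."""
    return edit_distance(pw1, pw2) >= min_distance
```

By this measure, 'hunter2' and 'hunter3' are one edit apart, so changing only the trailing digit when following a password pattern would not count as 'several characters of difference'.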
Here's what to do as a site administrator or programmer:
  • Allow passwords to have any combination of letters, numbers and special characters, and to be of very long length. Don't restrict password content other than imposing a minimal length, or requiring at least two of the above types of characters. So many people run into sites with complicated rules (short maximum length, no special characters, etc.) that they have to make up a password they will inevitably forget.
  • Implement two-factor authentication if there is a high risk of compromised information.
  • If your site holds risky information such as substantial personal or financial data, implement some other forms of extra security, such as challenge questions when a computer in a different IP address range is used, and gradual slowing of responses as more and more incorrect password attempts are entered.
  • Don't block people from using password managers without good cause. Password managers likely result in a net increase of security. 
  • Put in place a robust reset process that uses multiple factors. Force people to phone if some of the factors are not present. Factors might include emailing their stored email address first, without a reset link initially, and verifying some other known personal information first.
  • Allow people to save multiple email addresses, so if people change service provider you still have a way to contact them to verify identity.
  • After a password is changed, email people at their email addresses of record, to alert them that the password has been changed.
  • Never put a link to any password-protected website in any email you send to people; the only exception might be a link sent in a reset operation that follows the above guidelines, is sent instantly on request, and is only valid for a very short time.
  • Always think about usability as well as security; low usability of a security setup will force people to use simple passwords, write them down, or abandon your site.
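Two of the points above translate directly into code: a permissive password check that restricts only minimal length and character variety, and an exponential slow-down of responses to repeated failed attempts. The specific numbers (minimum length 8, 1-second base delay, 5-minute cap) are my own illustrative choices, not a standard:

```python
def password_acceptable(pw, min_length=8):
    """Accept any characters; require only a minimum length and at least
    two of the three classes: letters, digits, special characters."""
    if len(pw) < min_length:
        return False
    classes = sum([any(c.isalpha() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(not c.isalnum() for c in pw)])
    return classes >= 2


def retry_delay_seconds(failed_attempts, base=1.0, cap=300.0):
    """Gradually slow responses as failed attempts mount:
    exponential backoff, capped so legitimate users can still recover."""
    return min(cap, base * (2 ** max(0, failed_attempts - 1)))
```

Note that the validator deliberately has no maximum length and no forbidden characters, which is the opposite of the complicated rules criticized above.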
Some other sites of interest include this and this. My opinion above contradicts these sites to some extent.

Monday, September 23, 2013

Just because fingerprints can be hacked doesn't make them useless in the iPhone 5S

As this article states, the fingerprint reader of the new iPhone 5S has been hacked by the Chaos Computer Club.

But does that mean Apple is "stupid" as they say, and that fingerprint authentication is unwise?

No, for the following reasons:

  • Right now, many people avoid using passcode locking because it is slow. This method will encourage them to lock their phones because it is faster to unlock them.
  • Passcode locking is almost certainly less secure than hackable-fingerprints due to the possibility of people looking over one's shoulder.
  • The average thief who decides to keep a lost phone they found or mugs someone and runs off with their phone generally won't have time to perform sophisticated fingerprint forging before the owner of the iPhone locks or wipes their device remotely.
  • It improves accessibility for the blind.

The lesson is that we should approach security from several directions. Avoid keeping critical information in plaintext on any computer or phone, protected by just one method. Use two-factor authentication, obfuscation, and passwords/passcodes in addition to fingerprints for such data. Also arrange for remote wiping in advance.

I have other suggestions for Apple (and others thinking of using this technology).

  1. Use geofencing. As an option, allow fingerprint-only access when in the home or other places where the phone recognizes it spends a lot of time; it could 'learn' the user's workplace geographic coordinates, but require the passcode when elsewhere.
  2. Allow longer time intervals for passcode-required access. Currently the passcode can be required immediately, or after an interval has passed, with settings up to 15 minutes. The only other alternative is 'no passcode'. However, an interval of half an hour, an hour, or even a day could be very useful too, to deter theft, especially in conjunction with geofencing and entry of an Apple ID for changing the passcode.
  3. Keep developing biometrics: Fingerprint recognition combined with facial recognition and/or voice recognition could double the difficulty of hacking. For example, with both fingerprint and facial recognition (both instant) a hacker couldn't just lift a fingerprint without also obtaining a photo of the user. That would require knowing whose phone it is. 

The idea is that someone reluctant to enter their passcode very often might be more willing if it was required only once in a while.
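The geofencing idea in suggestion 1 amounts to a simple policy check: is the phone within some small radius of a learned trusted place? A minimal sketch, where the trusted places and the 200 m radius are hypothetical values a phone might learn, not anything Apple actually implements:

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def required_unlock(lat, lon, trusted_places, radius_km=0.2):
    """Fingerprint alone suffices near a learned place;
    elsewhere, require the passcode as well."""
    for plat, plon in trusted_places:
        if distance_km(lat, lon, plat, plon) <= radius_km:
            return "fingerprint"
    return "passcode"
```

The same structure extends naturally to suggestion 2: the time since the last passcode entry could be a second input alongside location when deciding which unlock method to demand.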


Wednesday, May 29, 2013

My policy on link spam in comments on my blog

More and more often I receive emails to moderate 'link spam', in other words links embedded in a comment on my blog that are primarily or solely intended for 'search engine optimization'.

The comments often say something like 'Great blog, good points'. Sometimes they are actually well-thought-out comments on the material in the post, and are attached to a relevant post. However I do not accept comments with links unless the comment and all the links meet the following criteria:

  1. The text on which the link is placed makes it clear to the reader of the link where it points, e.g. your company, your product, yourself or an informational site.
  2. The comment itself says something relevant, and is not there for the sole purpose of exposing the link.
  3. I believe that the linked page has relevant information (or a product or service) that matches the subject of my blog post and adds value to the post. The information does not have to agree with what I have said; in fact I welcome argument and contradiction.
  4. If the comment mentions a product, service or company then what is being marketed is something that I am not morally opposed to and think readers of the post could potentially benefit from (although I would not ever endorse or even verify products or services in links).
  5. The poster uses a verifiable identity. They must give their email, some other legitimate means of contacting them, or else the linked page or site needs to list a person with this name when searched. I sometimes will contact the person to verify it is them.
  6. The site being linked to is, in my opinion and at the surface level, legitimate and respectable and neither plastered with advertisements nor poorly crafted.

Here's an example: Today there was a comment from an accounting company on my post about solar energy. Upon visiting the company's website it seems the company provides services to help people cost-justify solar installations. Points 3, 4 and 6 seemed to be satisfied, so I would have accepted the link if the other rules had been followed.

Here is the text of the comment, however:

Hi, nice post. Well what can I say is that these is an interesting and very informative topic on solar energy financial management. Thanks for sharing your ideas, its not just entertaining but also gives your reader knowledge. Good blogs style too, Cheers!

This kind of 'flattery' adds little of relevance. I would have accepted it as friendly encouragement if there were no links, but the presence of a link makes such wording violate rule 2, since there is no additional useful information.

To add something even slightly useful and not break rule 2, the poster could have said, "People buying solar installations may need help doing the needed financial analysis; companies like ours can help with that."

The link was buried under "solar energy financial management".  Since the page linked was not a general page about that topic (e.g. a wikipedia page or some other pure unbiased information site) then rule 1 is being violated. To avoid breaking rule 1, the linker needed to put the link on the name of the company.

Furthermore, the person leaving the comment gave the name of a person, but a search yielded no such person at the company in question, violating rule 5.

I suggest that bloggers in general adopt rules similar to mine.




Friday, May 24, 2013

UML in Practice talk at ICSE: And How Umple Could Help

I just finished attending the ICSE talk by Marian Petre of the Open University, entitled "UML in Practice".

She conducted an excellent interview-based study of 50 software developers in a wide variety of industries and geographical locations. Her key question was, "Do you use UML?"

She found that only 15 out of 50 use it in some way, and none use it wholeheartedly.

A total of 11 use it selectively, adapting it as necessary depending on the audience. Within this group, use of diagram types was: class diagrams: 7, sequence diagrams: 6, activity diagrams: 6, state diagrams: 2, and use case diagrams: 1.

Only 3 used it for code generation; these were generally in the context of product lines and embedded software. Such users, however, tended not to use it for early phases of design, only for generation.

One used it in what she called 'retrofit' mode, i.e. "Not unless the client demands it for some reason".

That leaves the 35 software developers who do not use it (70%). Some reported historical use, and some of these did in fact model using their own notation.

The main complaints were that it is unnecessarily complex, lacks the ability to represent the whole system, and has difficulties when it comes to synchronization of artifacts. There were also comments about certain diagram types, such as state machines being used only as an aid to thinking. In general, diagram types were seen as not working well together.

She did comment on the fact that UML is widely taught in educational programs.

My overall response to this paper is, 'bingo'. The paper backs up research results we have previously published, which served as a motivation for the development of Umple.

Features of Umple that are explicitly designed to help improve UML adoption include:
  • Umple can be used to sketch (using UmpleOnline) and the sketch can become the core of high quality generated code later on.
  • It is a simplified subset of UML, combatting the complexity complained about in Petre's research.
  • It explicitly addresses synchronization of artifacts by merging code and UML in one textual form: UML, expressed textually, is just embedded in code, with the ability to generate diagrams 'on the fly' and to edit the model by editing either the code or those diagrams.
  • It integrates diagram types: State machines work smoothly with class diagrams, for example.
  • Diagrams like state machines finally become useful in a wide variety of systems, not just embedded systems.
I hope that if Umple can become popular, then in a few years, we could do a study like this and report quite different results.

Scaling up Software Engineering to Ultra-Large Systems: Thoughts on an ICSE Keynote by Linda Northrop


Linda Northrop just gave an interesting talk at ICSE 2013 about ultra-large-scale systems (ULS).

My takeaways from this talk are the following points:

  • ULS refers to systems with large volumes of most of the following factors all combined together synergistically to increase complexity: source code in multiple languages and architectures, data, device types and devices, connections, processes, stakeholders, interactions, domains (including policy domains) and emergent behaviors.
  • ULS systems run in a federated manner; they are on all the time, with inevitable failures handled and recovered locally, so as not to affect the system as a whole. The analogy to the functioning of a city (where fires occur every day) was very apt.
  • Build-time and run-time are one-and-the-same: Pieces of a system need to be replaced on the fly, and dynamic updating and reconfiguration needs to be possible.
  • They inevitably involve 'wicked' problems with inconsistent, unknowable requirements that change as a result of their solution.
  • Development can neither be entirely agile (due to the need to co-ordinate some aspects of the system on a vast scale), nor follow traditional 'requirements-first' engineering. On the other hand, parts of a system can be developed in an agile manner.
  • All areas of software engineering and computer science research can be used to help solve issues in ULS. Examples include HCI studies of how diverse groups of users use diverse parts of such systems, or computational intelligence applications to such systems.

She gave some examples including the smart grid, climate modelling, intelligent transportation and healthcare analytics. Actually, it is not clear to me that climate modelling necessarily fits the definition. It may have large volumes of code, and run in a distributed manner, with federated models, and quite a few stakeholders and policy domains, but do a majority of the other factors above apply? Perhaps.

From my perspective, the keys to ensuring that ULS systems can be built and work properly are to apply the following techniques and technologies. However, in order to do this we need to properly educate computer scientists and software engineers with knowledge about these items that we know today, but which is not universally taught, and hence not applied:

  1. Model driven development (with tools that generate good quality code in multiple languages and for multiple device types)
  2. Distributed software architecture and development
  3. Rugged service interfaces so subsystems can be independent of each other, and have failsafe fallbacks
  4. Test-driven development: Where requirements are unknowable, it is still possible to specify those parts of systems that can be understood with rigorous tests. Subsystems so-specified can then be confidently plugged together as requirements evolve.
  5. Spot-formality: Formal specification of parts of a federated ULS system that are critical to safety, the economy, or the environment. 
  6. Usability and HCI, to ensure that the human parts of the system interact with the non-human parts effectively.


My Umple research helps address item 1, and is moving towards addressing items 2, 3 and 5. We deploy items 4 and 6 in the development of Umple.

Sunday, May 19, 2013

Some lessons from MiSE at ICSE

I just finished attending the two-day Modeling in Software Engineering workshop at the International Conference on Software Engineering in San Francisco.

Here are some of the take-away lessons for me (these do not necessarily reflect the ideas of the speakers, but my interpretations and/or extensions of their ideas).

Industrial use of modeling: There was very interesting discussion about the use of modeling in industry, and there seem to be two key and related directions for such use. Michael Whalen on Saturday gave lots of examples of the use of MATLAB and Simulink in various critical systems (and particularly the use of Stateflow). Lionel Briand, on the other hand, talked about using UML and its profiles to solve various engineering problems; however, he too mostly focused on critical systems. In a panel he pointed out that most of the Simulink models he had worked with are just graphical representations of what could just as well be written in code (i.e. with little or nothing in the way of additional abstraction).

What struck me was that both presenters, and others, seemed to embrace what I might call 'scruffy' modelling: Briand talked about users adapting UML to their needs, and others talked about Simulink as a tool that does not have the formal basis of competing tools, but nonetheless serves its users well.

Many people in the workshop pointed out that we need to boost the uptake of modelling. Various ways to achieve this were emphasized:

  • Improve education of modelling
  • Build libraries of examples, including exciting real-world ones, and ones that show scaling up
  • Make tools that are simpler and/or better so more 'ordinary' developers will consider taking up modelling
  • Allow modeling notations to work with each other and other languages and tools

It turns out that all four of these have long been objectives of my Umple project. So it seems to me that if the Umple project pushes on at its present pace, we stand to have a big impact.

Speaking of Umple, I gave a short presentation that seemed to be well received, although my personal demonstrations to a number of participants seemed much more effective, with people appearing to be quite impressed. The lesson from this is that people really can see the advantages of our approach, but a hands-on and personal demonstration may work best as a way to help people see the light.

Context: Another theme of the MiSE workshop that repeatedly appeared was 'context'. Briand pointed out that understanding the problem and its context is critical before working on a model-based solution; the modelling technique to be used will depend deeply on this context. Context can be requirements vs. design, the specifics of the domain (e.g. the fact that space systems must be radiation hardened), or some aspect of the particular problem.

In my opinion, they are certainly right: Understanding the context is critical, and the tool, notation or technique needs to be selected to fit the context. However, I also believe that we need to work on generalities that can apply to multiple contexts, in the same manner that general-purpose programming languages can be used in multiple contexts. For example, the general notion of concept/class generalization hierarchies can be applied in almost every context, whether it be modeling the domain, specifying requirements for the types of data to be handled, or designing a system for code generation. I think state machines can also be applied in a wider variety of contexts than people currently apply them in: They are used in many real-time systems, and they have been used for specifying navigation in user interfaces. But in my experience they can also be applied in systems such as in this Umple example.
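To make the state-machine point concrete, here is a minimal hand-written Java sketch (purely illustrative: this is not Umple-generated code, and the class and event names are my own invention) of the enum-plus-event-methods style that model-driven tools typically generate for such models:

```java
// Illustrative sketch of a state machine for an "everyday" (non-real-time)
// domain: a course section that is Planned, then Open, then Closed when full.
public class CourseSection {
  enum Status { Planned, Open, Closed }

  private Status status = Status.Planned;
  private int registered = 0;
  private final int capacity = 2; // small illustrative capacity

  // Event: open the section for registration (only valid from Planned)
  public boolean open() {
    if (status != Status.Planned) return false;
    status = Status.Open;
    return true;
  }

  // Event: register a student; the machine closes itself when full
  public boolean register() {
    if (status != Status.Open) return false;
    registered++;
    if (registered >= capacity) status = Status.Closed;
    return true;
  }

  public Status getStatus() { return status; }
  public int getRegistered() { return registered; }
}
```

Events that do not apply in the current state are simply rejected, which is part of what makes the approach usable well beyond real-time systems.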

Testing: An interesting theme that came up several times related to testing: It was pointed out that it is worthwhile to generate tests from a model, but it must also be recognized that, in the context of a model used to generate code, such tests serve only to verify that the code generator is working properly. They do not validate the model itself. Additional testing of the system is always essential.

Semantics and analysis: There was a lot of agreement that the power of modeling abstractions can be leveraged to enable analysis of the properties of systems. To do this, however, it seems to me that semantics needs to be pinned down and better defined. 'Scruffy' use of UML and Simulink seems to detract from these possibilities. Again, one of the objectives of Umple is to select a well-defined subset of UML, to define its semantics very precisely, and to be able to analyse system designs in addition to generating systems from the models.


Saturday, April 6, 2013

Tips for doing well in a science fair from a long-time judge, and long-ago participant

For many years, I have been a judge at the Ottawa Regional Science Fair. I was also once a judge at the Canada Wide Science Fair. From grades 7-12, I entered science fairs every year and won some prizes.

The following is a bit of wisdom for youth who want to do really well, impress the judges and win prizes.

1. Start really, really early. For example, if the science fair is in March, think about your project and get going in January, or even October. I am always sad when I am judging a project, notice a problem, point it out, and the student says, "yes, I know, I noticed that too, but it was too late, I was doing the experiment just a couple of days before it was due". Think about entering a science fair just like you would a sports contest: Plan to enter and take the time needed to get better and better at it. Don't treat it like a piece of homework. When I was a teenager, I actually worked on one project over three years, and entered different 'phases' of the project as I got better and better.

2. 'Know your stuff' well. Spend extra time reading books from the library, reading on the Internet, talking to your parents (if they know about science), talking to your teachers, contacting real scientists by email, and so on. Look up things you don't understand.

3. Be imaginative and try out different things: The more creativity you show, the more you will impress the judges. This goes back to the first point: To be creative, you need time to try out the ideas you have, and maybe even to start again, or explore different approaches if the first approach doesn't work.

4. Avoid doing experiments that are exactly the same as others have done, or that come straight out of books and websites. Certainly it is good initially to try experiments that you copy from others, to learn how to do science. But for a science fair you want to change things a little and try different variations from what others have proposed.

5. Start small, and then add more and more to your project as you learn more and get better. When I was a teenager I learned how to make some electronic circuits from a kit and got pretty good at making them work. Then I got a book that told me how to design electronics, bought a bunch of components and made a very complicated system. It looked really impressive, but it didn't work properly. What I should have done would have been to start with something very small, get it working, and then repeatedly try something a little more sophisticated. By the way, I did win a prize for my system that 'didn't work', but I might have won a bigger prize if I had approached it more slowly, getting each new bit working as I added it. The same advice applies if your project is a computer program: Start with a simple program, and get it working. Add a little more and get that working. Keep doing this repeatedly. We call this approach 'agile'.

6. Make sure you learn key aspects of the scientific method if your project is an experiment. The following are some examples:

  • Test with more than one of each thing. So, for example, if you are growing plants with three different types of fertilizer, don't just grow three plants, see if you can grow 9 (three of each). If you don't do this, and one ends up being smaller or dying, you don't know whether it was because of the fertilizer, or because it caught a disease, or just was slightly different naturally.
  • Repeat your experiment. This is similar: the idea is that you try your whole experiment again to ensure you get the same result. You obviously need time to do this.
  • Make sure you have a control. In the above case, that would mean one group of plants has no fertilizer.
  • Make sure you keep everything else constant. In this example, that would mean that all the plants get the same soil, pot, sunlight, temperature, etc. Each plant should start out the same size as well (e.g. from seed).
  • Make sure you measure everything relevant. In the plant example, you might measure growth every day, but you could also measure the colour and shape of the leaves for example.
  • Use the right measuring tools and practice measuring so you know you are getting the right measures. For example, I judged a science fair where three different projects needed to measure the amount of salt in water (salinity). One of them measured pH (acidity) instead, another measured density instead, but the third got a kit for measuring salt in pools. That was by far the best choice. And report your results using the metric system: This is what scientists all around the world use. 

7. Don't get your parents to do the work for you. Use your parents for advice; have them help with tricky things, but don't let your parents take the lead. By all means do some projects with your parents, but for the science fair you need to show what you have done mostly independently. Judges can almost always spot a parent's work: It stands out as sophisticated stuff that the student can't really explain fully.

8. Make your display look really nice. Use graphs, photos, and diagrams. Give nice headings organizing the different aspects of what you are presenting, such as 'Background', 'Hypothesis' (the main idea you are testing out), 'Method', 'Results' and 'Conclusions'. Emphasize key points and words using colour, bold type, etc. Where you are showing text, make sure it is in big print, big enough that somebody standing about 150 cm away can read it. Don't write paragraphs or even full sentences: Just write abbreviated points. If you also want to say things as paragraphs and sentences, put these in a separate report that you display on your table.

9. When presenting, focus on what you did, your results, and your conclusions. Avoid spending too much time on the background (the judge can read that or ask you questions), and avoid spending too much time talking about unrelated topics. Several times I have judged environment projects where the students did a nice experiment, but they spent a lot of time in their presentation focusing on the bad state of the world's environment, rather than the details of their own project.

10. Don't ever read from a script: Presentations work best when you are talking freely (extemporaneously). If you find this hard, practice over and over.

11. Accentuate the positive. If you have had results that have partly worked, and partly not, be honest and admit that you were only partly successful, but emphasize your success. I had one case where a student said his experiment didn't work (he had expected the water to be completely desalinated) when in fact he could have said, "I reduced the salinity by 75%". In my own system, which I talked about in point 5 above, I focused on the bits that did work.

12. Learn the 'rules' of the science fair. For example, the chemicals, electrical devices and water you can have on display will be limited. You need to know this so your whole exhibit won't be rejected for safety reasons on judging day. Have photographs (printed or on a computer) of any equipment you cannot display. Make sure you also know how wide and high you can make your display; often you are allowed to make a higher display than you might think. That can give you more space to display interesting things. When I was a youth, I made a double-high display with pull-down 'blind' type additional information one year; I won a trip to the Canada Wide Science Fair. The next year at the Canada Wide Science Fair, almost everybody had tall displays.

13. Practice presenting your project in front of others before judging day: Make sure you can describe it in the allotted time (e.g. 8-10 minutes). Have others ask you unexpected and challenging questions so you can practice giving answers 'on the spot'. The others could be parents, other teachers, cousins, uncles and aunts: Just ask people if they are willing to be an audience.

14. Remember that regardless of whether you win a prize, you have won by learning a lot about your subject, learning how to do science, and learning how to work independently.

Monday, February 25, 2013

Solar power has a bright future - provided sensible government policy is applied

This morning in the Oil Drum there is an excellent article on pricing of solar power.

Takeaway messages from this article are:


  • Solar power prices are now, in many markets, lower than what consumers pay for electricity on the grid. This is because of dramatically reduced prices of panels and inverters, driven by economies of scale and technological improvement. This trend will continue; just as computer prices trend down as technology improves, the same will happen for solar photovoltaics.

  • Because of the above, it now pays to install and generate your own power at sunny southern latitudes, and the positive-payoff geographical regions will steadily expand (latitude is the biggest factor, but cloud cover is also an issue). Hence more and more people will install such systems, including both private consumers and companies. In the long run this bodes very well for lowering fossil fuel consumption and reducing future climate change.

  • The market for producing solar equipment has shifted to low production-cost markets, as happened with other technology products. That is harmful to the production industry in developed countries, but on the other hand the installation industries should continue to experience growth and profits due to demand for installation, and energy-intensive industries will benefit from cheaper power. Ultimately there should be tremendous net gains to economies that encourage installation.

  • Governments have been fouling up markets by suddenly chopping feed-in tariffs. These are fixed rates paid for electricity produced on your rooftop. The problem was that they were set at extremely high levels, and then governments realized that, with the dramatically lower costs of solar production, the tariffs were far too high. However, rather than cutting them entirely, they need to be brought down to sensible levels so it is still possible to sell to the grid. Society will benefit tremendously from having a solar generator on most roofs. But since the sun only shines some of the time, and on sunny summer afternoons such installations make much more electricity than the underlying building needs, it is necessary to sell excess power to the grid. Without this ability the impetus to install is significantly reduced. Rates should be set at an economically justifiable level that changes over time and that is sufficient to ensure people will install systems, but also ensures no windfall profits.

  • Governments also need to set up the right environment for investment in transmission and storage of solar-generated power.

  • Even going off-grid entirely (which requires setting up your own storage system) is beginning to become an attractive option, and will become more attractive over time for all consumers. 

  • The market for electric cars will be boosted in tandem with increasing installation of solar photovoltaics, since recharging your own vehicles will result in big cost savings, and your vehicle, when not in use, can also serve as storage.

  • Systems installed today can be expected to last 10-20 years with significant maintenance (inverter replacement) at about 10 years. However as with all technology, reliability is likely to improve, so even longer time horizons may be possible, and systems installed today may last longer than expected.


The article has a lot of very interesting equations that can be used by businesses, consumers and economists to properly work out the business case for solar power.

Wednesday, February 13, 2013

Why many queries about God refer to the Ottawa Senators and Daniel Alfredsson

Today I have been asked to appear on CBC radio to explain why Apple's Siri is responding to certain questions about God with answers that imply that Daniel Alfredsson is God! Here's a link to the Podcast URL from CBC Ottawa's 'All in a day' show, which featured the interview.

The questions 'What does God look like?' and 'Show me a picture of God' show the following.



When asked 'What is God's home town?', the reply is Gothenburg, Sweden.

When asked what team God plays for, the response shown below is: 'The senators defeated the Sabres by a score of 2 to 0 yesterday'



My guess is that this is happening for one of the following reasons:


  • Someone at Apple (a Sens fan) or a small group has planted this deliberately.
  • A bunch of people on the web have tagged Daniel Alfredsson as 'God' (or someone has been quoted as referring to him as God) and Siri is finding this information and making the wrong inference.
  • It is a random bug in the software that Siri uses (less likely).
Note that even Watson, of Jeopardy fame, made some errors, and Siri isn't anywhere near as sophisticated. Most questions to Siri about God turn up answers indicating that 'religion is for humans', or offers to do a web search for the answer. This happens when you ask for photographs of God, for example.

Monday, December 31, 2012

Steady progress developing Umple

I would like to end 2012 by highlighting how the Umple model-oriented programming technology is progressing.

Numerous people have worked on Umple during 2012 including my graduate students Hamoud Aljamaan, Sultan Eid and Miguel Garzon as well as several former graduate students. Twelve UCOSP students fixed bugs and added many small features. UCOSP arranges for fourth-year students at many Canadian universities to work on open-source projects as their capstone course project. They have all been logging their progress.

Over 68 issues were closed in 2012, and many more have been moved into the 'Mostly done' status. Some of the key changes include:

  • Many more error and warning messages (such as this) to help users create correct Umple code (e.g. detecting duplicate attribute names).
  • Greater stability and functionality for UmpleOnline: It works better across browsers and looks much nicer, with syntax highlighting.
  • Arguments to transition events on state machines, and various other state machine improvements, such as automatic transitions upon completion of a do activity.
  • Immutable attributes and associations.
  • Automatically sorted associations.
  • Generated code that indicates the line number in the original Umple files where it came from, allowing editing and compiling of Umple, with error messages in Java pointing back to Umple line numbers.
  • Improved documentation, including an API reference, and a generated grammar document that is nicely coloured.
  • Passing through comments from the Umple source to the generated Java.
  • Command line arguments, such as controlling the language generated, and ordering Umple to compile the generated code.
  • Ability to embed an UmpleOnline diagram, or textual Umple, in a web page.
  • Numerous bug fixes.

Work well underway includes:

  • Adding constraints to Umple. Simple constraints, where you specify an attribute, a comparator and a value, are working, e.g. [age > 18]. These will prevent setters from violating the constraint.
  • Adding C++ code generation. Most of the pieces are in place, although it is not complete yet.
  • Adding a comprehensive tracing capability (Model-Oriented Tracing Language) to allow injection of trace directives at an abstract level.
  • Adding basic SQL generation.
  • Adding a capability to reverse engineer code into Umple (umplification).
  • Generation of state machine diagrams using the -g GvStateDiagram option.
  • The UIGU tool for generating user interfaces from models (this is essentially complete, but has been found to be too inefficient for widespread use, so it needs refactoring or rewriting).
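As a sketch of the constraint idea in the list above: the hand-written Java below is illustrative only (the class name and the boolean-returning-setter scheme are my assumptions, not actual Umple-generated output). It shows one plausible way a constraint such as [age > 18] can guard a setter so the attribute never takes a violating value:

```java
// Illustrative sketch (NOT real Umple output) of a setter guarded by the
// constraint [age > 18]: the setter rejects violating values and reports
// success or failure to the caller.
public class Person {
  private int age = 19; // initial value chosen to satisfy the constraint

  // Returns false and leaves the attribute unchanged when the new value
  // would violate the constraint [age > 18].
  public boolean setAge(int aAge) {
    if (!(aAge > 18)) {
      return false; // constraint violated; attribute unchanged
    }
    this.age = aAge;
    return true;
  }

  public int getAge() { return age; }
}
```

Under this scheme, a call like setAge(10) returns false and leaves the attribute untouched, while setAge(30) succeeds.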

In addition to making progress on the above, work planned for the near future includes:

  • Adding Autosar, multi-threading and real-time concepts to Umple.
  • Adding generation of formal specifications, as well as a formal specification of Umple semantics.
  • Research into comparing Umple to regular code using metrics.
  • Further research into usability of Umple
  • Ability to debug code in Umple without relying on looking at generated code
  • Many more example systems
  • Automatic layout in UmpleOnline, using GraphViz.

Ongoing news about Umple can be found on Facebook and Google Plus. An up to date analysis of Umple can be found on Ohloh.


Tuesday, December 18, 2012

Evidence of climate change: Snow depth and chance of white Christmas decreasing

When a major storm hits, some people say 'climate change'. But they should avoid doing so, since individual events by themselves, no matter how record-breaking, cannot be used to determine a long-term trend.

Telling evidence of climate change can only come from looking at long term changes, i.e. comparing certain averages from lengthy periods many years ago to the same data for more recent periods.

Environment Canada has a very interesting page documenting the likelihood of a white Christmas. In it they compare data from the period 1992-2011, to the period 1963-1982 (my childhood to my adulthood). The results are startling.

Snow depth: Out of 39 cities, 33 have seen a decrease in snow depth, and only four (Vancouver, Victoria, Hamilton and Brandon) have seen an increase. Twelve cities now see less than half the snow they used to on Christmas Day. The following are the percentage changes in depth:

Fredericton -72.7%
Halifax -70.0%
Kamloops -63.6%
Saint John -63.6%
Moncton -60.9%
Penticton -60.0%
Charlottetown -58.8%
Sarnia -55.6%
Stephenville -52.2%
Quebec -51.2%
Kelowna -50.0%
Montreal -50.0%

Chance of a white Christmas: 31 of the 49 cities have seen their chance of a white Christmas drop. Only three (St. John's, Victoria and Vancouver) have seen an increase. Sarnia holds the record, with a 56% lower chance of a white Christmas; Toronto Airport is not far behind at 46% lower.

This is clear evidence of climate change right across Canada. The regional differences are particularly interesting. Decreases in depth can be noted from Whitehorse to most of the Prairies, throughout Ontario, Quebec, and the Atlantic provinces.

Thursday, November 22, 2012

Career slowdown due to childcare is a human rights issue for academics and other professionals

This morning, the Globe and Mail has an article about a report from the Canadian Council of Academies discussing the difficulties female faculty members have advancing their careers.

In general, the process of academic advancement does not mesh well with raising families, whether you are a female or a male.

In order to achieve tenure and promotion, professors are supposed to continually build a publication track record. Taking a break, or 'slowing down', just doesn't work.

One can't properly take maternity or parental leave and expect to advance. Indeed, female colleagues of mine routinely work during their maternity leave. One even had her nanny with her in her office looking after her young babies, just a few weeks after each child was born. Why is such leave often impractical? 1) You have to maintain supervision of graduate students; you can't just abandon PhDs in progress. 2) You can't just abandon research programs you have carefully negotiated, since you often have deliverables or expectations from research clients. 3) It often takes 12-30 months to get papers published in top conferences and journals; you have to keep that process moving, and attend the conferences when papers are accepted. 4) The process of recruiting and getting new PhD students going can take 1-3 years; if you wait until you get back from a maternity or parental leave, you have a long gap before you again have a productive research team (and heaven forbid that you might have another child on the way).

After any statutory or negotiated leaves in a baby's first year, you have a fixed number of courses to teach, so any slowdown while children are young is inevitably deducted from your time to do research and write publications. If teaching and administration take a fixed 30 hours a week, then slowing down 33% overall (e.g. from 60 hours a week to 40) due to family responsibilities means reducing research time from 30 hours a week to 10, a 67% cut. Hardly anyone realizes this consequence.

Interestingly, the Law Society of Upper Canada (Ontario), which had a progressive policy of enabling female lawyers to take time off for childcare by helping cover their office expenses during their absence, is poised to drop that policy.

This problem, however, is not exclusively a female problem. It affects men too. It contributes to divorce when male academics (and lawyers or other professionals) are unable to take their share of the childcare and family workload. It leads to family-oriented men being left behind on the career ladder, or simply deciding not to pursue opportunities that they otherwise would have.

I have personally found that I have not been nearly as successful in research since having my three children. The sleepless nights and other family tasks have slowed me down very considerably. I know this is the case for other male colleagues. PhD students can be particularly badly affected. I was lucky to have made it to full professor just at the time my first child was born. I believe I might never have made tenure if I had had children earlier. It must be so much harder for women, who face a greater biological imperative to slow down their careers.

Institutions and society must recognize this issue, especially now that women make up the majority of students in most academic disciplines. I have seen too many women professors leave simply because of this issue, or decide not to take on higher-level responsibilities. And my graduate students, both male and female, who have had children inevitably have huge drop-offs in research performance.

It must become a violation of human rights for an institution or profession to fail to consider childcare and family in promotion, and to fail to have active programs in place to accommodate employees and members while their children are young.

Universities and granting agencies, for example, should explicitly have policies that expect and account for 60%+ drops in research productivity when people have young children. Co-supervision of graduate students should be the norm for essentially all graduate students. And when I talk about young children, I don't just mean babies and toddlers; the productivity effect of caring for family may slowly drop off, but it doesn't really ever drop to zero, and should continue to be accounted for until children are capable of travelling by themselves to activities and looking after themselves at home when required.

Friday, November 16, 2012

Yes the provocation and arms buildup by Hamas is intolerable, but aren't there other ways for Israel to respond?

The average person in the world today sees Israel's bombing and military buildup and finds it hard to see the justification for the amount of force being deployed. This will just fuel the hate against Israel, which is not in Israel's best interest.

Yes, elements in Hamas have been bombing Israel for the last year with relatively ineffective "boring" rockets that nonetheless terrorize Israeli citizens. Yes, I accept that Hamas is becoming more brazen and starting to deploy more effective weapons that need to be stopped. Some level of right-to-defend is justified. Yes, there are a great many people in Gaza with fundamentalist attitudes that demand the destruction of Israel.


If the world is supposed to believe that Israel is a mature democracy, can't Israel try other tactics? Some of the following come to mind:
  • Call for United Nations resolutions every time they are bombed.
  • Instead of bombing with explosives, bomb with millions of 'propaganda' leaflets explaining to the Gazans how evil the Hamas bombing is. Or bomb with devices that have loudspeakers that explain to Gazans what is going on.
  • Unilaterally stop the tit-for-tat for a few days to see what happens. If Hamas rockets continue, give a burst of intense response at a pre-announced time, but then stop again. Accelerate this, but always show more restraint than Hamas.
  • Call on elder statesmen from other countries (US, Egypt, Turkey, Jordan) to get together to help figure out how to get Hamas to stop the provocation and rhetoric.
  • If they have to bomb (perhaps because they have intelligence about more sophisticated weapons), announce 10 minutes before where they are going to bomb, so people can escape but relatively little military hardware can.
  • Parachute in water and extra humanitarian aid to make the people of Gaza think a little about Israel's intentions.
  • Post on an interactive website exactly where they aimed, and what they intended to hit, for the world to see.
Some of these ideas may be impractical. But surely there must be something other than a pure, intense military response vastly greater than the rockets Hamas has been using. Israel has a much stronger army, superb anti-rocket defences, and powerful friends around the world. It doesn't have to destroy its reputation and foster ever more hatred among neighbouring countries by overdoing its responses.

There is just no way that the current action will serve as a 'deterrent'; angry people are not deterrable.  If anything the current action will just stoke up more attacks on Israel for years to come and risk wider war and more Israeli deaths. It is clear that the action must be aimed at destroying weapons Israel knows or suspects to be present. And perhaps they have to be careful not to reveal the source of their intelligence, hence precluding some of my ideas above. Nonetheless, showing restraint and waging an intense propaganda war surely must be viable.