The human context of quality

Spec Explorer now available for Visual Studio 2012

Posted on 2013-Sep-9 at 07:48

Last year I mentioned here that Spec Explorer, Microsoft's model-based testing extension for Visual Studio, seemed to be on hiatus. I posted a request on User Voice for Microsoft to release an update to make the extension compatible with Visual Studio 2012.

And so they have. This summer an update to Spec Explorer was released. It can now be downloaded from the Extension Gallery in Visual Studio 2012 or from here:

http://visualstudiogallery.msdn.microsoft.com/271d0904-f178-4ce9-956b-d9bfa4902745/

Thanks to anyone who voted for that request.

Spec Explorer future in question

Posted on 2012-Dec-28 at 08:51

I have been using Spec Explorer 2010, a model-based testing extension to Visual Studio 2010, to generate test cases during the past year.

Since Visual Studio 2012 was released I have been wondering whether Microsoft plans to support Spec Explorer in it. I noticed they archived the Spec Explorer 2010 forum earlier this month, so I posted the question in another forum for Visual Studio testing tools. A Microsoft employee directed me to the Visual Studio User Voice feature request page.

I posted a request here:

http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3497685-support-spec-explorer-in-visual-studio-2012

It seems strange to have to ask a company to keep supporting a tool that came out of its own research program, but I suppose that is how things are done at Microsoft these days.

If you use Spec Explorer 2010 and would like to see it continue to be supported in Visual Studio 2012 and future versions, please visit the above link and vote for this feature request. Thanks.

Model Based Testing with Spec Explorer

Posted on 2012-May-9 at 07:09

This past week I started on a new team. I will be focused 100% on testing tools and automation as part of this new role. One of my first tasks was to learn more about Spec Explorer, a tool from Microsoft for model-based testing.

I have used the model-based testing methodology several times in my QA career, and I am excited to try it again. I first learned MBT on a tool called TestMaster from Teradyne, which was discontinued over a decade ago. Later, in 2007, I developed my own simple MBT tool in Java, which I called Hanno and released as open source on SourceForge. I had hoped to maintain and improve Hanno, but I never found the time.

This time I decided to learn Spec Explorer, an MBT tool from Microsoft that runs inside Visual Studio. Most of our existing test automation code is in .NET, so it made sense to try it. I also like the way you can build models in code and Spec Explorer generates the state machines automatically for you. Most model-based tools work in the opposite direction, generating code from a model rather than deriving the model from code.

Spec Explorer was surprisingly easy to learn. Many of the examples that came with it were overly complex, but I found that by stripping out the parts I did not need I could build a model from the ground up and model a very simple sequence of actions in Coded UI for one of my employer's web applications. Doing that helped me understand how Spec Explorer thinks, or at least how to make it work the way I think for UI testing.
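
To give a sense of what "building models in code" looks like, here is a minimal sketch of a Spec Explorer model program. It assumes the Microsoft.Modeling and Microsoft.Xrt.Runtime namespaces from the Spec Explorer runtime, and the actions and state are hypothetical; a real project would pair this with a Cord script that binds the actions to the implementation and defines the machines to explore.

// Minimal model program sketch (hypothetical actions and state).
// Spec Explorer explores the [Rule] methods to build a state machine,
// using Condition.IsTrue preconditions to decide which actions are enabled.
using Microsoft.Modeling;
using Microsoft.Xrt.Runtime;

namespace WebAppModel
{
    static class ModelProgram
    {
        // Model state: whether a user is signed in and how many items are open.
        static bool signedIn = false;
        static int openItems = 0;

        [Rule]
        static void SignIn()
        {
            Condition.IsTrue(!signedIn);          // enabled only when signed out
            signedIn = true;
        }

        [Rule]
        static void OpenItem()
        {
            Condition.IsTrue(signedIn);           // must be signed in
            Condition.IsTrue(openItems < 2);      // bound the state space
            openItems += 1;
        }

        [Rule]
        static void SignOut()
        {
            Condition.IsTrue(signedIn);
            signedIn = false;
            openItems = 0;
        }
    }
}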

I am looking forward to using Spec Explorer in the coming months, and using it to find some interesting bugs.

Performance testing checked off

Posted on 2011-Oct-12 at 02:55

Last June I took a new job as a Software Engineer in Test at a new company. It was a good opportunity to get my career on a different track, and over three months later I still feel pretty good about the move.

Almost immediately I became part of a team working on performance testing for a major software release. I had never done performance testing before, and the product (a web application written in Java) had never been tested that way before either, so it was a good opportunity to learn and do some work that was very visible and much appreciated. After hundreds of hours of running and monitoring tests though, I am pleased to say that project is finished.


The project was different in that it did not take the usual "user profile" approach to performance testing. The load tests did not attempt to create "typical" user activity. Rather, they focused load on a particular part of the application while a Coded UI test performed similar operations in the same functional area, and the response time of the Coded UI test was measured precisely. The tests were also run at various loads, comparing new code (after a major software upgrade) with old code to determine where performance had improved or degraded. We also tested with various debugging tools running to track garbage collection, blocked threads, and the like. In the process our team found and fixed several performance bottlenecks (due to synchronized code blocks that should have been made thread-safe without the synchronization).


I used MS Load Test in Visual Studio 2010, with the individual tests being MS WebTests. I had not used WebTests before, and while they are very powerful, their limitations quickly became clear. Because WebTests operate at the HTTP level, operations that require interaction with JavaScript dialogs (for example) simply do not get recorded and cannot be played back. Also, for complex applications like the product I am testing, using Extraction Rules is essential; I had to write several custom extraction rules in order to get all the tests to work.
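
For anyone curious what writing one involves, here is a minimal sketch of a custom extraction rule using the Microsoft.VisualStudio.TestTools.WebTesting API in Visual Studio 2010; the class name, the token being extracted, and the marker string are hypothetical.

// Minimal custom extraction rule sketch (hypothetical token and class name).
// It scans the response body for a value and stores it in the test context
// so later requests can reference it by the rule's context parameter name.
using System;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class SessionTokenExtractionRule : ExtractionRule
{
    public override void Extract(object sender, ExtractionEventArgs e)
    {
        string body = e.Response.BodyString;
        const string marker = "data-session-token=\"";   // hypothetical marker

        int start = body.IndexOf(marker, StringComparison.Ordinal);
        if (start >= 0)
        {
            start += marker.Length;
            int end = body.IndexOf('"', start);
            if (end > start)
            {
                // ContextParameterName is set on the rule in the WebTest editor.
                e.WebTest.Context[this.ContextParameterName] = body.Substring(start, end - start);
                e.Success = true;
                return;
            }
        }

        e.Success = false;
        e.Message = "Session token not found in response.";
    }
}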


I have been told that for the next release I will get my pick of what I want to work on, having taken on a tough task for the team. I am glad to be able to say I have done performance testing, but I do not want to make a career of it.

Visual Studio 2010 test tools: Lessons from the trenches

Posted on 2011-Feb-17 at 12:01

Our QA team has been using the Visual Studio 2010 test tools (including Microsoft Test Manager) for several months now, and we have gone through a couple of releases with the tools. While the tools generally work well, it feels at times like we are beta testing for Microsoft. I have logged several bugs against Test Manager; to Microsoft's credit, they have been responsive to those. We have also found situations where the tools do not work the way we expected, and have had to adapt our processes to match or work around Microsoft's thinking about how the tools should be used. Here are a few items that might be of interest to others who are using or considering using these tools:

  • Test Manager does not allow carriage returns in a test step. Here is a way to work around that: write your test cases in Excel and use the Test Case Migrator Plus tool to import them. Migrator Plus imports the test cases as they were formatted in Excel, including any carriage returns in the test steps. If you edit the test steps later in Test Manager, however, you cannot add additional carriage returns.
  • Microsoft assumes that test case work items and bug work items reside in the same TFS project. If you want to be able to submit bugs from Test Manager, you have to set up your projects in TFS such that your test cases and bugs are in the same project. Some features will work across TFS projects, but many will not.
  • Similarly, if you want to use the Recommended Tests feature where Test Manager will recommend which tests to re-run, your source code and your bugs will have to be in the same project.
  • When linking Issue (or other bug type) work items to Test Case work items, DO NOT use the Test Case link type. That type is for linking Test Cases to Shared Steps and will not work correctly when linking Issues to Test Cases. We use the Tested By link type on the bug work item side, which creates a Tests (verb) link type on the Test Case side of the link. As in "this test case TESTS this bug; this bug is TESTED BY this test case".
  • If you associate your test plans in Test Manager with TFS builds, be aware that the default retention policy for TFS builds is to retain only the last 10 builds, and also to delete the test results for any deleted builds. If you want to retain your old test results indefinitely, make sure you set the build retention policies accordingly. Otherwise your builds and test results will vanish out from under you, and you will be left scratching your head wondering where your test results went.
  • Only one suite of automated tests can be run from an instance of Test Manager at one time. In Quality Center it was possible to launch multiple sets of automated tests at the same time. Test Manager does not work that way. Something to keep in mind if you are coming from using Quality Center and QTP.
  • Coded UI tests generally work well, but to make them flexible you have to use test parameters in your linked test cases in Test Manager. Learning to use SearchProperties effectively is also a must. These features enable a test to be changed at runtime from what was recorded, without changing the recording or the UI Map (see the sketch after this list). Microsoft assumes that you will re-record the tests when the application changes; this is a naive assumption, but the above features make re-recording largely unnecessary.
  • When setting up a TFS build server, you can run the build service as a Windows service or as an interactive process. If you want to run your automated tests at build time, you will need to run the build service as an interactive process. I do not run the automated tests at build time, so I run it as a service, which builds the automated tests but does not run them.
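
To illustrate the point about test parameters and SearchProperties, here is a small Coded UI sketch using the Visual Studio 2010 UITesting API; the control ids, parameter names, and page are hypothetical.

// Small Coded UI sketch (hypothetical control ids and parameter names).
// Parameter values come from the linked test case in Test Manager at run time,
// and SearchProperties locate controls without relying on the recording.
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LoginTests
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void LoginWithParameters()
    {
        // Test case parameters from Test Manager arrive through the data row.
        string url = TestContext.DataRow["Url"].ToString();
        string user = TestContext.DataRow["UserName"].ToString();

        BrowserWindow browser = BrowserWindow.Launch(new System.Uri(url));

        // Identify controls by stable HTML properties instead of the recorded UI Map.
        HtmlEdit userField = new HtmlEdit(browser);
        userField.SearchProperties[HtmlEdit.PropertyNames.Id] = "username";
        Keyboard.SendKeys(userField, user);

        HtmlButton signIn = new HtmlButton(browser);
        signIn.SearchProperties[HtmlButton.PropertyNames.InnerText] = "Sign In";
        Mouse.Click(signIn);
    }
}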

In general I would say that the Microsoft testing tools are not bad for a first-generation set of tools. The MSDN forums and developer blogs are very helpful for understanding what to expect and for asking for help. If you find a bug, the Microsoft Connect web site is where you want to log it.


Good luck!

Seeing the wider context

Posted on 2011-Jan-13 at 08:12
A couple of years ago I attended a conference of the Association for Software Testing. It was a new organization and I was curious about it; the conference was also local, and a friend of mine was presenting there. I remember leaving with a sense of disappointment. I let my membership lapse after a year, but stayed on their mailing list just to see what it would evolve into. Recently I dropped off that too.

Somewhere along the line, the AST posted on their web site that they are not only about improving the software testing profession, but specifically are now also about promoting the set of ideas known as context-driven testing. At the core of this set of ideas is a reasonable and fairly obvious one: testers do what we can with whatever we have to work with. But attached to it is an idea I find unreasonable: that there are no best practices in software testing, and never can be, because context is all. What works at one company will not work at another, and there is no "best" that can be defined across the different industries where software and systems are deployed. "Best" is local; in other words, it is whatever your boss says it is today.

I do not support this concept at all. I find it defeatist and depressing. It amounts to not only an admission of powerlessness but a celebration of it. In this view, testers are no more than servants of their present employer, who is to be viewed as a given, fixed, not to be changed, because that would imply a standard against which to drive change.

In fact, a business consists of human beings with whom testers have many common interests. Also, businesses operate in a larger context which includes other industries, humanity as a whole, the planet on which we all evolved. Employers are sometimes the first to recognize that they do not have all the answers, and may look outside their own company and even outside their industry for expertise.

I agree that the processes used in testing a spacecraft would fail if rammed down the throat of an internet company unused to such rigor. But that hardly means that someone with spacecraft experience has nothing to offer in that context. Or vice versa. Witness the rockets being built these days, surprisingly well, by former Internet billionaires. In any case, ramming, or what the article calls "context imperialism" is not the only choice.

It is possible to take more of a "Tao" approach. Accept your employer and co-workers for who they are, and exert what influence you can, overcoming resistance gradually through erosion, while learning from them and from the results of your efforts to move toward "better" together over time. The fun part is when your co-workers start moving in the direction to which you have been pointing all along, on their own, and faster than you could have hoped. I have seen it happen.

In some cases, the employer will not be open to learning from you. I have seen that too. Then personal context comes into play. How desperately do I need this job? Is their way of doing "quality" so far outside my comfort zone that it is affecting my personal life? Will I be able to endure the discomfort long enough to effect change in either the employer or myself? Do I want to become the person they are trying to change me into? These are tough questions, and every tester has the right to ask them, and to find their own answers.

My point is that there is a wider context for the software testing professional to consider than their employer: there is a professional context and a humanity context, as well as a personal one. The wise tester needs to be aware of all of these.

TFS and VS 2010 Test Tools - Making the Transition

Posted on 2010-Nov-23 at 11:19
Earlier this year, I recommended to my employer that we transition our testing from HP Quality Center to Microsoft Team Foundation Server 2010. I am not generally a Microsoft fan but this move actually made sense for our company on several levels, and our management agreed. This past month we began that migration.

The first step was done by our Development team, who upgraded our Team Foundation Server from TFS 2008 to TFS 2010. A new project was created in our TFS server for our QA team to use. Some of the existing projects had to be modified to make them aware of the new work item types in TFS, such as the Test Case and Shared Steps.

We used an export tool from Juvander to pull our manual test cases out of Quality Center into Excel format. We then used a Microsoft import tool to bring the test cases from Excel into our new TFS project. Our team is now using Microsoft Test Manager for our manual testing, though we are still in the learning stage. One downside is that since we are sharing a server with our Development team, performance can be slow at times.

In order to fully use the new testing features, we will need to set up a Test Controller to define our test environments and test machines. This is also required for automated testing.

Getting our automated testing migrated from Quicktest Pro to Visual Studio 2010 Coded UI tests is going to be a separate manual effort. Apparently the Coded UI test recorder does not record events (or most of them at least) generated by a QTP playback. That would have been a helpful shortcut to get started, but as I expected it looks like we will have to re-record all our automated tests from scratch.

Overall the move will provide some benefits such as being able to link test cases to bug and requirements work items, which we were not able to do before. Using the same programming language as our developers (C# in our case) should provide other benefits down the road. The VBScript language that QTP uses is very limited, especially in its data structures. Using a "real" programming language for automation will be refreshing.

Diving into Visual Studio 2010 Test

Posted on 2010-Apr-15 at 03:33

I'm excited today because I have installed the released version of the Visual Studio Test 2010 tools. I am evaluating them as a possible replacement for QuickTest Pro and Quality Center in our QA organization.

I have set up a Basic install of Team Foundation Server 2010 on a server, and Visual Studio Ultimate 2010 on a test client. Visual Studio includes the Coded UI test functionality, along with the Test Manager tool for creating and recording manual tests. I was able to fail a manual test in Test Manager, log a bug from there, assign it to myself, and have it show up in Visual Studio where I could edit the bug, and see all the attached system information.

The Visual Studio Agents 2010 are also needed in order to run automated tests from Test Manager; I will be installing those sometime next week. So far so good though.

The unwatched pot

Posted on 2009-Jun-24 at 06:09

A year ago, when I took an offer for a testing job in a Windows shop, I had mixed feelings. I had tested Windows applications before, but for most of the past 20 years I was a Mac user at home, and for the past few years I had been testing Linux applications in my job. I wondered whether my experience and assumptions from working with other operating systems would bite me one day in a Windows job.

Recently I have been using the HP/Mercury tools QuickTest Pro and Quality Center to test a web application that runs under Internet Explorer.  I have been running Quality Center on a client running Windows 2003 server, and QuickTest Pro on a Windows XP test machine. I have also been running Remote Desktop on the 2003 machine to connect to the XP machine remotely.

I noticed that my tests ran fine when I was watching them, but would sometimes fail if I walked away from my desk. Digging through online forums, I learned a couple of things:

* If a screensaver locks either workstation, QTP tests will fail.

* If the Remote Desktop window is minimized, QTP tests will fail.

* Certain tests fail when run from Quality Center via Remote Agent on the remote machine but not when run from QTP directly on the remote machine.

* QTP is generally unreliable when run under Remote Desktop.

As all this sank in, I found myself wanting to scream. These behaviors seem deeply wrong to me, and I was not sure whom to blame: Microsoft, HP, or myself for not realizing that what I was trying to do would never work reliably given the limitations of Windows. Since neither HP nor Microsoft is likely to accept responsibility for these flawed behaviors, I'll give myself a pass as well.

I have given up on Remote Desktop, installed RealVNC on both machines, and changed the screensaver settings on both as well. So far all my tests are running perfectly. I have adjusted my expectations, though: unattended automation is not something that will work with QTP at all. Working with QTP was not my choice, and knowing what I know about it now, I'm not sure I would recommend it.

Test set properties caching issue

Posted on 2009-Mar-25 at 10:16

I've encountered an issue when using Quality Center and QuickTest Pro together.

I have been using user-defined test set properties in Quality Center to pass in test environment parameters for my QTP tests. When I want to change which test environment I run the tests against, I simply change the test set properties, refresh in QC, and run. It works great, most of the time.

Recently I have been seeing an issue where Quality Center ignores the current test set properties and passes cached values of the test set properties from an earlier run of the same test set to QuickTest Pro instead. This caching persists even after closing and re-opening both applications.

It's a bit like driving a car and suddenly your steering wheel decides that because the last turn it made was a right turn, from now on it will only make right turns.

I've noticed that sometimes QC has more than one connection active to the same machine; I wonder if it might be picking an old, stale connection rather than the current one, and that is why it is sending old data.

If I figure out the cause and a solution I will post it here.

Update: The issue appears to be an integration bug in the Mercury tools. Apparently QC opens two connections to QTP; this is visible in the QC admin tool. When a test set is finished, QC leaves the connections open. If the connections are not shut down, the old test set properties will be re-used, causing the caching behavior.

To prevent the caching behavior, both connections need to be shut down at the end of a test set. I tried using the Disconnect method of the QCUtil interface, but that does not work: QCUtil only accesses one of the connections and cannot shut down or reset the second one.

I found part of a workaround on the HP forum here. A QTP script has to launch an external VBS script that shuts down QTP.

I added some code to shut down the connections using the TDConnection object in the QTP automation interface. This method shuts down both connections. The VBS file looks like this:

'Give time for QTP script that called this to finish

WScript.Sleep(10000)

'Access the running QTP application
Set qtApp = GetObject("", "QuickTest.Application")

'Close the Quality Center connection, if any
If qtApp.TDConnection.IsConnected Then
  qtApp.TDConnection.Disconnect
End If

'Shut down QTP
qtApp.Quit

This code tells QTP to close its QC connection and then closes QTP. The next time a test set is run (even the same test set with different test set properties), QC will launch QTP with a new QC connection and pass the current test set properties to QTP when it runs.

This workaround does not completely eliminate the issue, but it does help.

Deleting files in special folders

Posted on 2009-Mar-17 at 04:59

I was trying today to write some VBScript in QTP to remove all the files from the Temporary Internet Files folder on Windows XP.

There is a simple QTP command, WebUtil.DeleteCookies, for deleting cookies, which works for IE if there are no browser windows open. But there is no equivalent command for deleting the temporary Internet files.

I found several examples online that use the DeleteFile command. One that actually works is listed here:

http://www.microsoft.com/technet/scriptcenter/resources/qanda/nov04/hey1102.mspx

The important part of this code is the second line:

Set objShell = CreateObject("Shell.Application")


This line has to be used because Temporary Internet Files is a "special" folder that does not behave like a normal folder. The usual method of deleting files will not work if the path to the folder is simply passed as a string.
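
For reference, here is a minimal sketch along the lines of that article. It assumes &H20& is the shell special-folder constant for Temporary Internet Files, and it only deletes files (subfolders would need a separate DeleteFolder call).

'Minimal sketch based on that article: resolve the special folder through
'Shell.Application, then delete its contents with the FileSystemObject.
Const TEMPORARY_INTERNET_FILES = &H20&

Set objShell = CreateObject("Shell.Application")
Set objFolder = objShell.Namespace(TEMPORARY_INTERNET_FILES)
Set objFolderItem = objFolder.Self

Set objFSO = CreateObject("Scripting.FileSystemObject")
objFSO.DeleteFile objFolderItem.Path & "\*.*", True  'True also removes read-only files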

Unidentified web objects

Posted on 2009-Feb-27 at 02:43

A bane of automated testing for web applications is having to test an object or page element that is not uniquely identifiable. I've run into this issue on several different projects and with different test automation tools, and have not found a general practice that will get around it. There may be specific hacks that get around a specific case (such as always having to select the 7th checkbox in a list), but nothing general.

This issue comes in several flavors, such as:

  • Multiple elements on the same page with the same identifier information.
  • Elements on a page with no specific identifier information.
  • Identifier information changes each time the application is run.

With QuickTest Pro, the result is often a failure to find the object, or an error message saying that multiple objects were found, and the tool has no basis to choose between them.

I ran into the first flavor today with a custom hierarchical menu that had three items (hidden under different parts of the tree, but still visible to QTP's descriptive programming) with every property identical except location on the page. To get around it I would have to write some very specific, fragile, custom code to find a list of matching objects and pick the right one based on relative coordinates, along the lines of the sketch below.
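
Here is roughly what that fragile workaround looks like in QTP descriptive programming; the browser, page, and element descriptions are hypothetical.

'Rough sketch of the workaround described above (hypothetical descriptions).
'Collect every element matching the ambiguous description, then pick one
'based on its position on the page.
Set oDesc = Description.Create()
oDesc("micclass").Value = "WebElement"
oDesc("innertext").Value = "Settings"

Set oMatches = Browser("title:=MyApp.*").Page("title:=MyApp.*").ChildObjects(oDesc)

Dim i, iTopMost, iMinY
iMinY = 2000000
iTopMost = 0
For i = 0 To oMatches.Count - 1
    If oMatches(i).GetROProperty("abs_y") < iMinY Then
        iMinY = oMatches(i).GetROProperty("abs_y")
        iTopMost = i
    End If
Next

oMatches(iTopMost).Click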

I suspect that this is not an issue that any test automation tool is going to be able to fix. The testing phase is too late to fix this issue. It has to be fixed during development.

What is needed is a tool to analyze a web page during development and flag any page elements that are not uniquely identifiable or persistent. These flags would then become work items in the IDE, similar to compile errors or code analysis errors. Such a tool should be enabled in IDEs and included in the build tools, and should cause a build to fail for any issues not marked as ignored. It would enable every build to be testable, since every element of every page would be identifiable (ignoring some types of elements that are not tested, such as static text labels).

Fortunately, HTML provides a simple mechanism for creating a unique identifier on a web page: the id attribute. Unfortunately, most developers and most web development tools do not use it or check for it. A tool that did would make the test engineer's life much easier.

The most annoying bug

Posted on 2009-Feb-12 at 07:35
Now into my second month of working with HP QuickTest Pro and Quality Center in my software QA job, the most annoying bug that I have run across is QTP's tendency to lose track of its connection to Quality Center for no apparent reason.

This happens when I am running test cases stored in Quality Center locally through QTP. It can manifest itself in several ways:

* When saving your edits to Quality Center, a "General Error saving the test" message appears.
* When saving your edits to Quality Center, a "Save As" dialog appears which is useless because you cannot overwrite a test in Quality Center. You can only save the test locally, which is also useless because there is no way to get that saved test into Quality Center without losing history.
* When running a test that references a function library stored in Quality Center, an error message says the function is not defined.

When this happens the only option is to reload the test from Quality Center and lose any changes you may have made. But the next time you edit or run the test locally, it happens again.

I haven't found a pattern to it yet, but it seems to come and go. Once it starts, it happens every time you run or edit a test from QTP; at that point you might as well call it a day, because you are not going to be able to edit or run from QTP.

This bug is inexcusable, and makes using QTP and Quality Center together much more difficult. It almost defeats the purpose of using the tools together. But I have no choice about that in my current job, so I soldier on as best I can.

Desktop shortcuts and target paths

Posted on 2009-Feb-6 at 04:15

Today I learned about an annoying property of Windows desktop shortcuts.

I need to access a Windows desktop shortcut that one of our applications installs and modify its TargetPath property to make it point to a slightly different location. The shortcut is implemented as a .lnk file but points to a web page. This is not the standard way of creating web shortcuts, but it does work. However, the TargetPath property of such a shortcut appears not to be accessible in the usual way.

For example, the following VBScript code will create a shortcut (or update it if it already exists) and then display the TargetPath property of the shortcut:

 Dim WshShell, objLnk, strLnk
 strLnk = "C:Documents and SettingsmyuserDesktopTest.lnk"
 Set WshShell = CreateObject("WScript.Shell")
 Set objLnk = WshShell.CreateShortcut(strLnk)
 objLnk.Description = "Test Me"
 objLnk.TargetPath = "C:Tests"
 objLnk.Save
 Msgbox(objLnk.TargetPath)
 Set objLnk = Nothing
 Set WshShell = Nothing

But the following code returns an empty TargetPath:

 Dim WshShell, objLnk, strLnk
 strLnk = "C:Documents and SettingsmyuserDesktopTest.lnk"
 Set WshShell = CreateObject("WScript.Shell")
 Set objLnk = WshShell.CreateShortcut(strLnk)
 objLnk.Description = "Test Me"
 objLnk.TargetPath = "http://www.google.com"
 objLnk.Save
 Msgbox(objLnk.TargetPath)
 Set objLnk = Nothing
 Set WshShell = Nothing

Both shortcuts work, however.

It seems that either VBScript or WScript is trying to be too smart for its own good, and is ignoring the TargetPath value for a .lnk file that points to a web page rather than a file path.

Improved timer function

Posted on 2009-Feb-5 at 10:49

The timer function I posted last week has an issue: the call to Exist does not always return immediately, so counting loop iterations is not a reliable measure of elapsed time. Here is an updated version that uses a MercuryTimer object to measure elapsed time rather than counting loops. It is based on the article by Meir Bar-Tal posted here.

Public Function WaitUntilObjectFound(ByRef obj, ByVal intTimeoutSec)
 Dim objTimer, intTimeoutMSec
 intTimeoutMSec = intTimeoutSec * 1000
 Set objTimer = MercuryTimers.Timer("ObjectExist")    
 objTimer.Start    
 Do
  WaitUntilObjectFound = obj.Exist(0)       
  If WaitUntilObjectFound Then           
   objTimer.Stop           
   Exit Do       
  End If       
  Wait 1
 Loop Until objTimer.ElapsedTime > intTimeoutMSec   
 objTimer.Stop
End Function

QuickTest Pro first learnings

Posted on 2009-Jan-27 at 06:53

In the past month I have been working with HP QuickTest Pro (QTP) in my job. Our company makes a complex application which includes a web server application and a custom installer for client-side files. We have a library of automated tests in WinRunner that need to be migrated to QTP, so I have been working on that.

QTP claims that it can be used for "keyword-based" testing using record, playback, and object repositories. That approach can work, but it is pretty limited, and I quickly had to move beyond it and use the "descriptive programming" feature in most of my code. In descriptive programming, objects are identified dynamically at run time rather than through an object repository. I was quite comfortable with this, because I have worked with web automation tools such as Watij where there is no object repository and all object identification happens at runtime.

Several reasons why descriptive programming was necessary in my case:

  • In our company we have multiple "environments," including several internal test environments and a production environment. These have different web addresses, so object repositories recorded in one environment will not work in another, and I do not want to create duplicate tests for each environment.
  • During installation, our installer has two browser windows open with the same title at one point, but slightly different URLs. When using an object repository, the test would not pass consistently because QTP was unable to tell the difference between the two windows.
  • Our application is highly customizable and looks slightly different for different customers. Again, object repositories recorded in one customer environment would not generally work in another because of different window titles.

Descriptive programming with regular expressions and wildcards is helpful in dealing with these kinds of issues. I was able to write test cases that would work across multiple test environments and multiple customizations, and consistently find the right window.
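
Here is a small sketch of that style; the window titles, URL pattern, and button name are hypothetical. Property values in descriptive programming are treated as regular expressions, so one line can match the right page in any environment or customization.

'Descriptive programming sketch (hypothetical titles, URL pattern, and button name).
'The regular expressions let the same statement work across environments and customers.
Browser("title:=.*MyApp.*") _
    .Page("title:=.*MyApp.*", "url:=https?://[^/]*/install/step2\.aspx.*") _
    .WebButton("name:=Next").Click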

Another issue I ran into was that our application has custom menus that are WebElement objects created dynamically on a mouseover event. QTP's Object Spy and Update Objects feature would not see them. I worked around this using the SendKeys method of the Windows Script Host Shell object. Not elegant, but reliable.
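
The workaround looks something like this inside the test; the key sequence is hypothetical and depends on the tab order of the page.

'Sketch of the SendKeys workaround (hypothetical key sequence).
'Keyboard events reach the dynamically created menu even though QTP
'cannot identify it as a test object.
Set WshShell = CreateObject("WScript.Shell")
WshShell.SendKeys "{TAB}"               'move focus to the menu trigger
Wait 1                                  'QTP Wait; give the menu time to render
WshShell.SendKeys "{DOWN}{DOWN}{ENTER}" 'walk to the second item and select it
Set WshShell = Nothing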

A final issue I had to solve was that the internal test environments have different response times. I initially put Wait statements into the code to make sure that screens were present before acting on them. But because of the different timing this would sometimes not work, and it was inefficient because for long stretches the automation was doing nothing. The Sync statement also did not always work, because some of our pages update several times before they are really finished loading. So I wrote this:

Function WaitUntilObjectFound(obj, timeOut)
 Dim timeElapsed, returnVal
 timeElapsed = 0
 returnVal = true
 Do Until obj.Exist(1)
  timeElapsed = timeElapsed + 1
  If timeElapsed >= timeOut Then
   returnVal = false
   Exit Do
  End If
 Loop
 WaitUntilObjectFound = returnVal
End Function

This function checks for the existence of an object approximately every second and returns when the object exists or when the function times out. It will work for any object that has an Exist property, and it works great for waiting for child dialogs or child windows. It can report failure incorrectly, however, if the object you are checking for is hidden by another active window. For instance, if obj is a main window and a child window pops up, Exist will return false and the function will eventually time out, even though the main window still "exists".

I expect that most of this is not news to testers who have been working with QTP for a while. Record and playback is only useful as a first step, as a tool to learn how QTP sees objects. Descriptive programming is the only way to go for real world applications and robust test automation frameworks.

Greetings

Posted on 2008-Sep-23 at 05:25

Hello to everyone at SQAForums and SQABlogs. Here's a little bit about me and why I have started a new blog here.

I've been working in software testing and development for 15 years. I first became seriously interested in software in the early 1990s when, after years of studying mechanical engineering, I started my own software company. I learned two things from that experience: I am not a businessman, and developing software is a lot harder than it looks if you haven't done it before.

I landed in Boise, Idaho where I tested software and firmware for laser printers at Hewlett Packard. HP had a very traditional waterfall release model, and a strong quality culture in the laser printer group. I also spent several years developing in-house test management and automation tools.

In 2005, I left HP and moved my family to Seattle, where I have worked for several small companies with both traditional and Agile development models. I have done a lot of development and automation in my career, but made a decision to stay on the QA side of the industry in recent years.

The name of this blog is partly about how I feel about quality. I have mixed feelings about the "context-sensitive" way of thinking about software testing because all too often the "context" they are talking about is the business context. One way of expressing this view is to say that as a tester you can in the end only accomplish what the people running the business allow you to accomplish. The only best practices are the ones the developers and management in the company you are working for at the time will support.

That is true, but testers are also part of a larger quality community as well as society. Our work impacts our families and the families of the users of the software we test. That is what motivates me to get up in the morning.


To realize that you do not understand is a virtue; Not to realize that you do not understand is a defect.
Lao Tzu
