Saturday, November 10, 2012

My Gift to Myself: Startup Weekend @ Cincinnati

After a year of trying, I finally got to gift myself a weekend. Yes, it was a long-pending gift, meant for some better retrospection and future re-expectation. I deliberately chose a location about 4 hours away, to give myself enough time to meditate away from the normal day-to-day chaos.
I will try to put down the events in my day-by-day format.
Day 1:
Reached Cincinnati and drove directly to The Mets Center. The location is a massive training facility with amazing infrastructure.
Registrations were pushed by 30 mins. I was the third one to check in. There you go, the event kicks off.

Networking, meet and greet. Quite an electric environment was building up. Without much expectation, I was living every moment, trying to absorb every bit of the excitement of being with some nerds, and the agony of not having taken any next steps towards my dreams. But that's probably part of the plan, more or less.
The event unfolds. The usual anchoring starts, with an anchor seemingly less prepared but still managing her ground. Quite interestingly, she confirmed that she was a last-minute pick for the MC. I don't know what the heck that means... I can just take a guess... it must be something to do with anchoring the event.

Before the actual Day-1 pitch process there was a good trailer: picking two random words and creating a startup (pitch) in 15 mins. Our words were "Bludgeoned" & "Princess". We as a team were quite creative in carving a horny pitch out of the words we chose.
Then starts the actual pitching process. I had always been intrigued reading about it and watching videos, and hence I definitely wanted to try my hand and not leave any experience untouched in my first Startup Weekend. I was one of the 17 pitchers (idea testers). There were a couple of cool pitches. I personally liked 2 and was quite amazed at the thoughts and catalysts behind those.

Around the room, among the 50 current and to-be entrepreneurs, I could see everyone wanted a Developer and a Designer on their teams, BUT no mention of a tester. Felt this was the moment. So I changed my pitch at the last moment to give some mileage to the craft I am passionate about.
I think I did a good job of the presentation and got very solid feedback on the idea. But since it was a service-driven idea, it was not "SEXY". During voting, I expectedly got only 2 votes. All the ideas with >6 votes got to create teams.

Now came my pivot. I quickly changed gears when I knew my idea was out of the race, and started to dig into the 2 I liked the most to be a part of. I went with my gut and joined the idea targeting the education of youth. We got together, and there you go, the actual fun started.
We got into the room and started the actual work I had been looking forward to. Our team had Ryan (idea owner), Andrew (Developer), Eric (Developer), Suraj (Developer), Mark (still unclear) and myself (Test Architect).

After the usual round of introductions, talking about our strengths and weaknesses, we hit the ground running. We spent 1.5 hours hashing out the ifs and buts, the bells and whistles, and the focus for the next 50 hours or so.
This was probably the meat of the Day-1 effort, where we got disciplined and streamlined the features we would be working on for the next 48 hours, the workflow, and the prototyping strategy.

Key takeaways, personally, from Day 1

1. There is a lot of passion around, and I need to get addicted to it

2. Ideas are good, but an idea is just the first step. The fun is in taking the plunge and executing on it
Day 2…will be on its way Sunday morning…


Wednesday, August 22, 2012

Can Defect Density have a UCL, and if so, why should it?

While doing my usual morning reading, in a totally separate context, a question started to haunt me. I tried to ignore it, but the urge to think about it grew stronger every fleeting minute. The question was/is: can Defect Density (DD) have an Upper Control Limit (UCL), and if so, why should it?

I started to retrospect on the experience I have gained and the lessons I have learnt, and to contemplate various perspectives. My perspective, still going strong, compels me to say: since the LCL & UCL are benchmarks set for a certain metric, they imply "at least" and "at most". W.r.t. DD, it is basically the ratio of the total number of valid defects to the total number of uniquely executed test cases. Considering the two premises, I feel DD is a measure of code quality more than of a tester's ability to find defects.

From a testing perspective, it perfectly makes sense to say that we will find at least x% of defects (read: LCL). But when we start to claim, from that very same testing perspective, that this is the maximum number of defects we can find, we start to tread a wrong path. Isn't it? Because there is no heuristic, no model, no methodology that can tell the maximum number of defects a testing team (or tester) can find. On a different plane, we can always calculate the Defect Injection Rate (the minimum total # of defects that will be there)... but NEVER the max. Secondly, even a developer cannot commit to the maximum number of defects that can be found in his/her code.

Hence my strong opinion is that, no matter what, we can never set a benchmark for a maximum density (UCL). If that's the case, then WHY is it that while formulating SOWs and SLAs we always put both an LCL & a UCL on Defect Density? Is it not time to challenge our canned approach? Let me know your thoughts, so that we can create a disruption in the way we have been working all this while... - Manav Ahuja
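To make the ratio concrete, here is a minimal sketch (with purely hypothetical numbers) that computes DD and checks it only against a lower limit, in line with the argument that an LCL makes sense but a UCL does not:

```python
# Defect Density (DD) = valid defects / uniquely executed test cases,
# as defined above. All numbers below are illustrative, not real data.

def defect_density(valid_defects, unique_executed_tests):
    if unique_executed_tests == 0:
        raise ValueError("no executed test cases")
    return valid_defects / unique_executed_tests

dd = defect_density(valid_defects=12, unique_executed_tests=150)
LCL = 0.05  # "we will find at least x%" -- a lower benchmark makes sense

print(f"DD = {dd:.3f}")  # DD = 0.080
print("meets LCL" if dd >= LCL else "below LCL")
# Deliberately no UCL check: per the argument above, there is no sound
# basis for capping the number of defects testing can find.
```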

Friday, June 08, 2012

Online Seniors? What?

There are over 21 million 'online' seniors (65+) in the US alone, and can you imagine their digital trend? See the picture below (source: Forrester Blogs). Amazing potential and an incredible opportunity.... The entrepreneur within is jolting me :-) - Manav Ahuja

Saturday, April 28, 2012

SQL structures in Business Intelligence (BI) Testing - Top "10" learnings

Business Intelligence Systems are designed to provide strategic information for analysis. Some of the features of Business Intelligence Systems are:
1) Database designed for analytical tasks
2) Data from Multiple Source Systems
3) Read-Intensive Data
4) Availability of Current and Historical data
5) Ability for users to initiate reports

A few of the common terms used in BI space are:
a) Source Systems
b) Staging Area
c) Data Extraction, Transformation and Loading (ETL)
d) Enterprise Data Warehouse (EDW)
e) Reporting (drill-through reports)
f) UI components such as Cubes, Dimensions & Measures
g) FACT tables, Dimension tables

With the basics out of the way, let's jump to the core of the post – Top 10 potshots for solid SQL structures:

1. Alias names should be consistent across different SQLs and should be sensible. Have a standard for alias names – for example, all fact-table aliases start with 'F' and all dimension-table aliases with 'D'. This makes debugging easier.

a. Fact_Company FCOM
b. Dim_Company DCOM

2. Use the 'ISNULL' function whenever there is a data comparison between 2 columns. SQL Server does not treat NULL values as equal in comparisons; if the NULL is converted to some numeric/text value, those records can be compared as well.
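A small runnable sketch of the idea. The tables and data are hypothetical, and SQLite's COALESCE stands in for SQL Server's ISNULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, val TEXT);
    CREATE TABLE tgt (id INTEGER, val TEXT);
    INSERT INTO src VALUES (1, 'a'), (2, NULL);
    INSERT INTO tgt VALUES (1, 'a'), (2, NULL);
""")

# NULL = NULL evaluates to NULL, so row 2 never matches:
naive = conn.execute(
    "SELECT COUNT(*) FROM src s JOIN tgt t ON s.id = t.id AND s.val = t.val"
).fetchone()[0]

# Substituting a sentinel for NULL lets row 2 compare equal as well:
with_null_handling = conn.execute(
    """SELECT COUNT(*) FROM src s JOIN tgt t
       ON s.id = t.id
       AND COALESCE(s.val, '~') = COALESCE(t.val, '~')"""
).fetchone()[0]

print(naive, with_null_handling)  # 1 2
```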

3.Add appropriate comments wherever required

4. To make SQL statements more readable, start each clause on a new line and indent where needed.
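The original example did not survive, so here is a sketch of the intended layout on a hypothetical schema, with each clause starting on its own line (run through SQLite to show the formatting is valid SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Fact_Company (company_id INTEGER, revenue REAL);
    INSERT INTO Fact_Company VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# One clause per line, aligned for readability:
query = """
    SELECT company_id,
           SUM(revenue) AS total_revenue
    FROM   Fact_Company FCOM
    GROUP  BY company_id
    HAVING SUM(revenue) > 60
    ORDER  BY total_revenue DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [(1, 150.0), (2, 75.0)]
```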

5. Use the DISTINCT clause in SELECT statements only if there is a possibility of duplicate rows. The DISTINCT clause creates a lot of extra work for SQL Server and takes physical resources away from other SQL statements.

6. Avoid the 'NOT IN' condition as far as possible, because it offers poor performance; the NOT IN form usually runs slower than the rewrite below. Instead:

b. Perform a LEFT OUTER JOIN and check for the NULL condition
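A runnable sketch of the rewrite on hypothetical tables (SQLite is used for illustration; the performance claim concerns SQL Server). Both forms find companies with no sales rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Dim_Company (company_id INTEGER PRIMARY KEY);
    CREATE TABLE Fact_Sales (company_id INTEGER);
    INSERT INTO Dim_Company VALUES (1), (2), (3);
    INSERT INTO Fact_Sales VALUES (1), (3);
""")

# Slower form: NOT IN against a subquery
not_in = conn.execute("""
    SELECT company_id FROM Dim_Company
    WHERE company_id NOT IN (SELECT company_id FROM Fact_Sales)
""").fetchall()

# Preferred rewrite: LEFT OUTER JOIN plus a NULL check
left_join = conn.execute("""
    SELECT DCOM.company_id
    FROM   Dim_Company DCOM
    LEFT OUTER JOIN Fact_Sales FSAL
           ON DCOM.company_id = FSAL.company_id
    WHERE  FSAL.company_id IS NULL
""").fetchall()

print(not_in, left_join)  # [(2,)] [(2,)]
```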


7. Avoid using ORDER BY in SELECT statements unless it is really needed, because it adds a lot of extra overhead

8. UNION combines the result sets of 2 or more SELECT queries and removes duplicate rows between them, whereas UNION ALL returns all rows (even if a row exists in more than one of the SELECTs). Use UNION ALL instead of UNION when you are sure the result sets are distinct. This prevents the UNION statement from sorting the data to remove duplicates, which hurts performance.
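A minimal illustration of the difference, using literal SELECTs in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# UNION de-duplicates (sorting internally to do so); UNION ALL does not.
union = conn.execute(
    "SELECT 1 AS n UNION SELECT 1 UNION SELECT 2"
).fetchall()
union_all = conn.execute(
    "SELECT 1 AS n UNION ALL SELECT 1 UNION ALL SELECT 2"
).fetchall()

print(union)      # duplicates removed: two rows
print(union_all)  # [(1,), (1,), (2,)]
```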

9. Avoid using SELECT *. Always write the required column names after SELECT; this cuts down unnecessary disk I/O.
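A sketch of the preferred form on a hypothetical table – only the columns the consumer needs are named:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Dim_Company (company_id INTEGER, name TEXT, address TEXT);
    INSERT INTO Dim_Company VALUES (1, 'Acme', '1 Main St');
""")

# Avoid: SELECT * drags every column (and its I/O) along.
# Prefer: name exactly the columns needed.
rows = conn.execute(
    "SELECT company_id, name FROM Dim_Company DCOM"
).fetchall()
print(rows)  # [(1, 'Acme')]
```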

10. When there is a choice between the IN and the EXISTS clause in SQL, prefer the EXISTS clause, as it is usually more efficient and performs faster.
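The original side-by-side example did not survive; here is a sketch of the two equivalent forms on hypothetical tables, with EXISTS being the one the tip recommends:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Dim_Company (company_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Fact_Sales (company_id INTEGER);
    INSERT INTO Dim_Company VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO Fact_Sales VALUES (1);
""")

# Usually less efficient: IN materialises the whole subquery result.
via_in = conn.execute("""
    SELECT name FROM Dim_Company
    WHERE company_id IN (SELECT company_id FROM Fact_Sales)
""").fetchall()

# Preferred: EXISTS can stop at the first matching row.
via_exists = conn.execute("""
    SELECT name FROM Dim_Company DCOM
    WHERE EXISTS (SELECT 1 FROM Fact_Sales FSAL
                  WHERE FSAL.company_id = DCOM.company_id)
""").fetchall()

print(via_in, via_exists)  # [('Acme',)] [('Acme',)]
```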


Bonus tip – when there is a choice between the IN and the BETWEEN clause in your Transact-SQL, use the BETWEEN clause, as it is much more efficient.

Disclaimer: The best practices listed are a result of my learnings from encounters with seasoned DWBI geeks.

-Manav Ahuja

Thursday, January 12, 2012

Testabulous 2012

Wow, what a hiatus it has been. Almost a year since I last posted. With the dawn of the new year, I at least hope to work on one of my resolutions - to blog as frequently as once a month, not once a year.

With that promise, let's break the hiatus with a new year note to our community.

This post has been unique right from the point when I thought I needed to break the monotony with a new year wish, to this point when I am penning it without a concrete plan. I am letting my thoughts wander; I just don't want to say "Wish you an interesting new year".

Alright, taking a leaf from Stevie's (Steve Jobs) presentation style, I will ask (and answer) three questions to help carve an ecstatic 2012 for the testing community we are all championing.

1. Whom are you serving daily?

My thoughts: You are serving the testing artists, who look for better thoughts, those who enjoy the better thoughts and those who think we have better thoughts.

2. Who the hell cares about the testing artists?

My thoughts: I do, and I daily remind myself to do that! I believe testing is a craft… only a few respect it; most others think "Anyone can do testing"

3. When the testing industry has crossed $13B USD (source: Internet), why do people still think "It's just freaking QA/testing (used interchangeably)" or "Anyone can do testing"?

My thoughts: I feel it's a mindset that has existed for ages. Remember the old 70s–80s (at least that's what I do): the common perception of teaching was similar; it was considered the worst but easiest option available to people who could not become an Engineer/Doctor/CA (in some pockets of our society, though). Maybe that upbringing is so ingrained that people have not outgrown the rusty thoughts. With the premise remaining the same, teaching has just been replaced by QA/testing in the IT industry. The people at the helm think anyone can do QA/testing because it's just freaking QA/testing.

I am sure the above three questions must have provoked some thoughts and maybe ignited some passion to help our community. Each one of us has the authority and responsibility to help our community garner much-needed respect. I hope we will see QA/testing become the poster child of the IT industry during our lifetime.

Wishing you and your family an unparalleled 2012.

Thanks for your time,

- Manav

Tuesday, February 08, 2011

"Freaking QA Tester"

Last week, I was talking to one of the seasoned IT executives and educating him about the craft called Testing. Taking a leaf from two of the experts I follow – Michael Bolton and James Bach – I was explaining Confirmation Testing vs. Exploratory Testing, Defect vs. Issue, etc. Seemingly uneasy at understanding the analytical, critical, logical landscape of testing, he yelled: "It is just freaking QA". Killing my urge to reply with the same passionate yell, I just said "Oh, thanks for proving your point, I rest my case" and walked out.

Clouded with so many questions and thoughts, I still find it difficult to accept such narrow mindsets, and I am preparing to challenge them next time (if the situation permits).

Having given myself a few days to cool off and reply appropriately, I was about my usual business on Monday morning, checking and replying to mails. A new mail popped up with a very provocative message. I was just blown away by the second side of the same coin.

The mail, from an account executive of some company, read:
I sent you an email a few weeks ago about how SFDC, Pershing, Zappos and hundreds of other quality assurance professionals are reducing test center cycle time, increasing test coverage and consistently delivering to production much higher quality releases.

Imagine if you could consistently reduce production defects by 50% while cutting test time in half. Too good to be true? If you don’t believe it, please take 3 minutes to see how has used to transform their test center operations from a “necessary evil” into a leadership organization.

If you’re challenged with delivering higher quality applications to production with less time to properly test, please contact me to discuss further and organize an online demonstration or hands-on trial. You can call me directly at (xxx)xxx-xxxx or reply to this email.

This was the second mail I received directly from this gentleman.
Two contrasting yet related incidents made me ponder: something is grossly wrong. Something needs an overhaul to help our craft get its due respect, and to transform the technological advances of generations to come.

Anyhow, I thought of questioning this guy to understand how well he understands testing to make such lofty (read: exaggerated) claims. Here is what I asked him:

I must appreciate that your mails do lend the provocative edge that makes the reader read, and not shift-delete. With that said, I have a couple of very curious questions to validate the claims you are making, so that I can assure myself of investing my 3 minutes in your case study.

1. How do you define Quality (when you say it’s 2x)?
2. How do you measure quality?
3. On what basis do you say that the time assigned was ideal for testing? From what I know from my experience, testing NEVER stops. So what is your yardstick for the claim that you can reduce the time by ½?
4. What constitutes the TEST COVERAGE?
5. Please define "quality in production". At what stage of the testing lifecycle is it declared that if we achieve this/that, the testing team will deliver quality in production? It would also be beneficial to know everything included in that call-out.

To no surprise, and very much in line with what I could have expected, I received a reply which was the same old sales s***. These are basic questions we answer daily for our clients. Since this is what our clients demand from us, is there any harm in demanding the same?

As for both the seasoned IT executive and this sales account executive: I pity the company that has employed such a salesperson, and their gullible clients (I am sure they have many). Through such inept representatives, a work culture is totally on display.
On the positive side, we "Freaking QAs" can take inspiration to help these morons, and many more like them, who are thriving in our industry.

- Manav Ahuja

Monday, June 21, 2010

Towers de SOA Testing

In an attempt to understand and learn SOA, I did extensive research and have collated my findings below. I hope this will help beginners like me.


As Service-Oriented Architecture (SOA) begins to form the fabric of IT infrastructure, active and aggressive SOA testing has become crucial. Comprehensive Functional, Performance, Interoperability and Vulnerability testing form the towers of SOA testing. Only by adopting a comprehensive testing stance can enterprises ensure that their SOA is robust, scalable, interoperable, and secure.

Web Services have blurred the boundaries between network devices, security products, applications and other IT assets within an enterprise. Almost every IT asset now advertises its interface as a Web Services Definition Language (WSDL) interface ready for SOAP/XML messaging. Web Services interfaces provide unprecedented flexibility in integrating IT assets across internal and external corporate domains. Such flexibility makes it the responsibility of IT staff from all domains such as Developers, Network Engineers, Security & Compliance Officers, and Application QA Testers to ensure that their Web Services work as advertised across functional, performance, interoperable and security requirements.

Towers de SOA Testing

Tower I: Functional & Regression Testing

Functional and Regression Testing is the first tower of SOA testing. IT professionals need to quickly test Web Services and set up the desired regression test cases. Ease of use in setting up such tests encourages technologists with varying skills and responsibilities to test their Web Services quickly and often.
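As an illustration, the sketch below builds a SOAP 1.1 request for a hypothetical GetQuote operation and runs a functional check against a canned response. No real service is involved; the service namespace, operation, and element names are all invented for the example:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical service namespace

def build_request(symbol):
    """Build a minimal SOAP 1.1 envelope for a hypothetical GetQuote op."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}Symbol").text = symbol
    return ET.tostring(env, encoding="unicode")

def check_response(xml_text):
    """Functional check: the response must carry a numeric Price element."""
    root = ET.fromstring(xml_text)
    price = root.find(f".//{{{SVC_NS}}}Price")
    assert price is not None, "no Price element in SOAP Body"
    return float(price.text)

# A canned response stands in for the real Web Service in this sketch.
canned = (f'<e:Envelope xmlns:e="{SOAP_NS}"><e:Body>'
          f'<q:GetQuoteResponse xmlns:q="{SVC_NS}">'
          f'<q:Price>42.50</q:Price>'
          f'</q:GetQuoteResponse></e:Body></e:Envelope>')

print("GetQuote" in build_request("IBM"))  # True
print(check_response(canned))              # 42.5
```

In a real regression suite, the canned response would be replaced by the live service's reply, with the same structural assertions run after each build.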

Tower II: Performance

Performance is the second tower of SOA testing. QA testers and network & security engineers should test the scalability and robustness of Web Services and determine the performance and endurance characteristics of their WSDL operations. Testers should determine response-time, latency and throughput profiles for target Web Services. In addition to performance profiles, testers should run tests for a specified duration to measure endurance and robustness profiles. They also need to determine scalability by bombarding target Web Services with varying SOAP messages across a range of concurrent loading clients.

Tower III: Interoperability

While loading a Web Service WSDL, consumer applications need to determine both design-time and run-time interoperability characteristics of the target Web Services. Developers should run a set of comprehensive WSI Profile tests and report interoperability issues with the Web Services WSDL. Adhering to WSI Profiles ensures that SOA assets are interoperable and that WSDL can work within heterogeneous .NET & Java environments.

Design-time WSDL interoperability testing is not enough; run-time interoperability testing is also necessary. Testing the interoperability of a Web Service requires creating specialized test suites for a WSDL. These tests ensure that the target Web Services are interoperable by actively sending specialized requests and determining whether the Web Service responds per the WSI Profile specification. Comprehensive design-time WSDL WSI Profile testing, combined with active run-time interoperability testing, ensures that IT assets can integrate independent of platform, operating system, and programming language.

Tower IV: Vulnerability Assessment

Vulnerability Assessment is the Fourth Tower of SOA Testing. Active Web Services Vulnerability Assessment is an emerging area of SOA testing. By creating specialized tests for a target Web Service, security officers can measure the vulnerability profiles of the target Web Service. Security Engineers need to ensure that Web Services vulnerabilities such as buffer overflows, deeply nested nodes, recursive payloads, schema poisoning and malware traveling over SOAP messages do not affect their critical Web Services. They need the ability to rapidly scan Web Services and assess areas of exposure, determine severity levels, provide vulnerability diagnosis, and publish remediation techniques. Web Services Vulnerability Assessment is a crucial pre-production and post-production step that every .NET and Java developer and security professional must take to ensure risk mitigation within their Service Oriented Architecture.

Finally, SOA – what it is and what it is NOT

• Service-Oriented Architecture is an architectural strategy that helps achieve closer business-IT alignment, by taking a three-dimensional perspective of the enterprise. The three dimensions being: technology, people and processes.
• The key aspect of SOA is to make business functionality available as a set of well governed, standards based, loosely coupled services and processes, defined in a flexible and agile manner.
• SOA is an infrastructure-based architectural approach to deliver business ‘functionalities’ as ‘shared services’ by using open standards and/or protocols of communication.
• SOA is an approach that allows for implementing business ‘capabilities’ that can be consumed as services.
• SOA is not about technology specific design or architecture – it is business driven (through capabilities and functionalities/functions) for service enablement of the processes!

Reference:- Internet research

~Manav Ahuja