Monday, April 22
Facts and measures are the foundation of true understanding, but misuse of metrics is the cause of much confusion. How can we use metrics to manage testing? What metrics can we use to measure the test process? What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? In this workshop, Rex will share some things he’s learned about metrics that you can put to work right away, and you’ll work on some practical exercises to develop metrics for your testing. In addition, Rex will walk you through a case study of an actual testing dashboard used to manage very large, high-risk projects at an RBCS client.
- Understand the relationship between objectives and metrics.
- For a given objective, create one or more metrics and set goals for those metrics (see the sketch after this list).
- Understand the use of metrics for process, project, and product measurement.
- Create metrics to measure effectiveness, efficiency, and stakeholder satisfaction for a test process.
- Create metrics to measure effectiveness, efficiency, and stakeholder satisfaction for a test project.
- Create metrics to measure effectiveness, efficiency, and stakeholder satisfaction for a product being tested.
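To make the pairing of objectives, metrics, and goals concrete before the workshop, here is a minimal sketch in Python (our illustration with invented numbers, not part of Rex's materials) that computes two common test metrics and checks each against a goal:

```python
# Illustrative sketch (invented data): two common test metrics with goals,
# the kind of objective -> metric -> goal chain the exercises work through.

def defect_detection_percentage(found_in_test: int, found_in_production: int) -> float:
    """Effectiveness metric: share of all known defects caught by testing."""
    total = found_in_test + found_in_production
    return 100.0 * found_in_test / total if total else 0.0

def execution_progress(executed: int, planned: int) -> float:
    """Project metric: share of planned tests executed so far."""
    return 100.0 * executed / planned if planned else 0.0

# Invented sample data and goal thresholds, purely for illustration.
metrics = {
    "DDP (%)":                (defect_detection_percentage(85, 15), 90.0),
    "Execution progress (%)": (execution_progress(420, 500), 80.0),
}

for name, (value, goal) in metrics.items():
    status = "on track" if value >= goal else "needs attention"
    print(f"{name}: {value:.1f} (goal >= {goal:.0f}) -> {status}")
```

Even in this toy, the structure of the exercises is visible: each metric exists only because an objective ("find defects before release", "finish planned testing") demands it, and each carries a goal so the number can trigger action.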
With a quarter-century of software and systems engineering experience, Rex Black is President of RBCS (www.rbcs-us.com), a leader in software, hardware, and systems testing. As the leader of RBCS, Rex is the most prolific author practicing in the field of software testing today. His popular first book, Managing the Testing Process, has sold over 40,000 copies around the world, including Japanese, Chinese, and Indian releases, and is now in its third edition. He has written over 30 articles; presented hundreds of papers, workshops, and seminars; and given about 50 keynotes and other speeches at conferences and events around the world. Rex is the immediate past President of the International Software Testing Qualifications Board and of the American Software Testing Qualifications Board.
Dan Downing
Principal Consultant, Mentora Group, Inc.
Eric Proegler
Senior Performance Engineer, Mentora Group, Inc.
Monday, April 22
Generating results that can be acted on to remediate performance bottlenecks takes much more than mastering a load testing tool and running tests that simulate hundreds of users. Defining the objectives and the test requirements is, of course, the critical starting point, but we’ve found that it also helps to know what the results we’re seeking should look like. Having key graphs in mind that depict scalability, capacity, throughput, and bottlenecks helps us design effective tests and capture the relevant metrics.
In this class, we will start with results: what they look like, how we derive insight from them, and where they come from. As we focus on identifying patterns in results that reveal what we are testing for, we will describe how to generate, collect, correlate, and interpret actionable information before and during performance tests. Ultimately, we will understand how to plan and design effective tests.
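As a taste of "knowing what the results should look like," here is a minimal sketch (our illustration with invented numbers, not course material) of how raw response-time samples become the summary figures behind scalability and throughput graphs:

```python
# Illustrative sketch (invented data): deriving the numbers behind
# scalability and throughput graphs from raw response-time samples.
from statistics import mean, quantiles

# (load level in virtual users) -> response-time samples in seconds
results = {
    50:  [0.21, 0.25, 0.22, 0.30, 0.24],
    100: [0.23, 0.28, 0.26, 0.35, 0.27],
    200: [0.45, 0.60, 0.52, 1.90, 0.75],  # long tail: a bottleneck signature
}

for users, samples in sorted(results.items()):
    avg = mean(samples)
    p95 = quantiles(samples, n=20, method="inclusive")[-1]  # 95th percentile
    throughput = users / avg  # rough req/s estimate via Little's law (X = N/R)
    print(f"{users:>3} users: avg {avg:.2f}s  p95 {p95:.2f}s  ~{throughput:.0f} req/s")
```

Plotted across load levels, flattening throughput alongside a climbing 95th percentile is one classic capacity-bottleneck picture, the kind of pattern this class teaches you to recognize and trace to a cause.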
The instructors (and volunteer workshop contributors) will draw on hundreds of performance projects, sharing the context and results that bring to life a framework for engaging in performance testing and delivering actionable results.
Come to this tutorial to become a better performance test planner, a skillful interpreter of results, and a valued member of the delivery team. Bring your own project challenges and results so we can discuss them collectively!
Dan Downing is co-founder and Principal Consultant at Mentora Group, Inc. (www.mentora.com), a testing and managed hosting company. Dan is the author of the 5-Steps of Load Testing, which he taught at Mercury Education Centers, and of numerous presentations, white papers, and articles on performance testing. He teaches load testing, and over the past 13 years has led hundreds of performance projects on applications ranging from eCommerce to ERP, for companies ranging from startups to global enterprises. He is a regular presenter at STAR, HP Software Universe, and Software Test Professionals conferences, and is one of the organizers of the Workshop on Performance and Reliability (WOPR).
Eric Proegler is a Senior Performance Engineer at Mentora Group, Inc. (www.mentora.com), a testing and managed hosting company. Before that, he led performance engineering for a software vendor, worked in software testing, and consulted on performance testing, hardware sizing, and other technologies involved in engineering and deploying software solutions.
Bob Galen
President & Principal Consultant, RGCG, LLC
Monday, April 22
The move from traditional tester to agile tester can be Extreme (pun intended). There is a wide variety of new skills to acquire, and established techniques that must be re-honed or adapted as well. Beyond the specific skills, however, is a larger and more fundamental change: the very mindset of the agile tester is different!
You must move from independent tester to quality advocate. You can’t simply verify requirements; you need to elicit and evolve them with your business stakeholders. If you are primarily a functional or manual tester, you’ll need to significantly broaden your skills across all areas of software testing. In addition, your “courage” needs to change with respect to organizational and team dynamics. You can’t simply be a wallflower who tries to “test in quality”. In a word, everything shifts or changes.
Join experienced agile coach & tester Bob Galen as we explore the key skill areas, both hard and soft, that you'll need to adjust in order to survive and thrive within agile teams.
- Explore specific technical testing skill adjustments for agile teams
- Discuss the adjustments that are required in agile automation and other tooling
- Establish the core adjustments required for test planning & reporting within an agile context
- Explore writing testable agile requirements (User Stories) and effective Acceptance Test-Driven Development (ATDD) approaches, as sketched below
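To give a flavor of the ATDD item above, here is a minimal sketch (our illustration, not Bob's material): an acceptance criterion for a hypothetical user story written as executable tests, with a stand-in Cart class playing the system under development.

```python
# Illustrative ATDD-style sketch (hypothetical story and code). The story:
#   "As a shopper, I want a 10% discount on orders over $100,
#    so that large orders are rewarded."
# In ATDD, the acceptance tests below would be agreed on with the business
# stakeholders before the Cart implementation is written.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        subtotal = sum(self.items)
        return subtotal * 0.9 if subtotal > 100 else subtotal

def test_discount_applies_over_100():
    # Given a cart holding $120 of goods
    cart = Cart()
    cart.add(120.00)
    # When the total is computed, then a 10% discount is applied
    assert abs(cart.total() - 108.00) < 1e-9

def test_no_discount_at_or_below_100():
    # Given a cart holding exactly $100, no discount applies
    cart = Cart()
    cart.add(100.00)
    assert abs(cart.total() - 100.00) < 1e-9

test_discount_applies_over_100()
test_no_discount_at_or_below_100()
print("acceptance tests pass")
```

Note the mindset shift the tutorial describes: the tests are not verifying a finished requirement after the fact, they are the requirement, elicited and evolved with stakeholders.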
Bob Galen is an Agile Methodologist, Practitioner & Coach based in Cary, NC. In this role he helps guide companies and teams in their pragmatic adoption of, and organizational shift towards, Scrum and other Agile methods and practices. He is currently President & Principal Consultant at RGCG, LLC, and also Director of Agile Solutions for Zenergy Technologies, where he applies his experience to help clients accelerate their agile adoption. Bob regularly speaks at international conferences and professional groups on topics related to software development, project management, software testing, and team leadership. He is a Certified Scrum Coach (CSC), Certified Scrum Product Owner (CSPO), and an active member of the Agile Alliance & Scrum Alliance. In 2009 he published Scrum Product Ownership – Balancing Value from the Inside Out, a book that addresses the gap in guidance towards effective agile product management; you can find it at http://goo.gl/mlYHF. Bob may be reached directly at firstname.lastname@example.org or email@example.com.
Dr. Cem Kaner
Professor of Software Engineering, Florida Institute of Technology
Dr. Rebecca Fiedler
President, Kaner Fiedler Associates LLC.
Monday, April 22
Many bugs are quite simple, and we can find them using relatively simple techniques. For example, in the black box world, the most common family of techniques is "domain testing" (boundary and equivalence class analysis for one or a few variables). In the glass box world, the most common techniques help us achieve high levels of structural or data coverage. Techniques like these are blind to deeper bugs: failures that surface when you work the application harder. These include failures that involve timing or intermittent memory corruption (to hunt these, we use high-volume techniques) and design weaknesses that frustrate the experienced user who is trying to complete an appropriate but not-necessarily-everyday task. To hunt design weaknesses, we use scenarios that are based on a deeper analysis of the user/software/hardware/environment system. For that type of analysis, we use qualitative research methods, guided largely by the CHAT (cultural-historical activity theory) framework. CHAT is frequently used in human factors research and in requirements analysis for many types of products and services. This tutorial will introduce you to using CHAT to guide requirements analyses that support scenario test design.
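To make the "relatively simple techniques" concrete before the tutorial moves beyond them, here is a minimal domain-testing sketch (our illustration, not Kaner's material) for a single hypothetical variable, a field accepting integer ages from 18 to 65:

```python
# Illustrative sketch (hypothetical field): boundary and equivalence class
# analysis for one variable, the simplest form of domain testing.
LOW, HIGH = 18, 65

def is_valid_age(age: int) -> bool:
    """Behavior under test: accept ages in [LOW, HIGH]."""
    return LOW <= age <= HIGH

# One representative from each equivalence class, plus the values at and
# around each boundary, where off-by-one defects cluster.
cases = [
    (LOW - 1,  False),  # just below the lower boundary
    (LOW,      True),   # on the lower boundary
    (40,       True),   # interior of the valid class
    (HIGH,     True),   # on the upper boundary
    (HIGH + 1, False),  # just above the upper boundary
]

for value, expected in cases:
    assert is_valid_age(value) == expected, f"unexpected result at {value}"
print("all domain tests pass")
```

Tests like these are cheap and effective against off-by-one mistakes, but, as the abstract notes, they say nothing about timing, corruption, or design weaknesses; that is where the scenario and CHAT material takes over.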
Cem Kaner, J.D., Ph.D., is a Professor of Software Engineering at Florida Institute of Technology and has been Director of Florida Tech's Center for Software Testing Education & Research (CSTER) since 2004. He is perhaps best known outside academia as an advocate of software usability and software testing.
Prior to his professorship, Kaner worked in the software industry beginning in 1983 in Silicon Valley "as a tester, programmer, tech writer, software development manager, product development director, and independent software development consultant." In 1988, he and his co-authors Jack Falk and Hung Quoc Nguyen published what became, at the time, "the best-selling book on software testing," Testing Computer Software. He has also worked as a user interface designer. In 2004 he co-founded the non-profit Association for Software Testing, where he serves as Vice-President for Publications.
For the past 25 years, Rebecca Fiedler has been teaching students of all ages, from kindergarten to university. In the testing community, Dr. Fiedler works with Cem Kaner on the Black Box Software Testing (BBST) online professional development courses and was an active volunteer with the Association for Software Testing's Education SIG. She is a regular attendee and presenter at the Workshop on Teaching Software Testing and has given numerous presentations at national and international conferences in education and educational technology.
Dr. Fiedler's dissertation research used social science methods to evaluate software as used by real users for high-stakes tasks in education administration. She has also taught research methods classes to university students and advised doctoral students conducting qualitative studies.