
Untangling Project Webs: A Unique Dataset for Stakeholder Success

Hey there! Let’s chat about something super important in the world of big projects – you know, those massive construction gigs, infrastructure marvels, and the like. We’re talking about how people connect, who talks to whom, and how all that social stuff actually impacts whether a project sinks or swims. It’s a bit of a puzzle, right? Trying to figure out the invisible threads that hold everything together, or sometimes, pull it apart.

For ages, we’ve known that the relationships between everyone involved – the stakeholders – are a big deal. They shape how things get done, how information flows (or doesn’t!), and ultimately, the outcome. But here’s the kicker: getting solid, real-world data on these networks in complex project environments? That’s been tough. Most of the time, the data we *do* have doesn’t quite capture the messy reality of a construction site or a huge infrastructure build. It’s like trying to map a bustling city with just a sketch.

So, we decided to roll up our sleeves and do something about it. We embarked on a journey, starting back in 2019 and wrapping up in 2024, to build a dataset specifically designed to tackle this challenge. We wanted to create something that project managers could relate to, something grounded in their actual experiences.

Why Stakeholder Networks Matter

Think about any big project you’ve been involved in, even just watching from the sidelines. It’s not just about concrete and steel, is it? It’s about the client, the contractors, the suppliers, the community, the regulators – everyone! The way these groups interact, the strength of their connections, who’s central to the conversation, and who’s left out – all of this forms a complex social network.

Research has hinted for a long time that these network structures are vital for project success or failure. But proving it with hard data from the field? That’s where the gap was. Measuring these connections is tricky because people aren’t simple variables. Their behaviours, attitudes, and backgrounds are all over the place. Designing a way to collect this data reliably and validly needed some serious thought.

We looked at existing methods, like surveys used in other fields, but found they weren’t quite right for the unique world of large-scale construction projects. These projects have their own rhythm, their own set of players, and their own kind of complexity. We needed something tailored.

Building the Dataset

Our goal was to create a structured tool, a questionnaire, that could capture the nuances of stakeholder networks within these project environments. We wanted to go beyond just listing names and roles. We aimed to understand the *attributes* of the network itself, alongside factors like project complexity and how the project actually performed.

We built upon established ideas from social network analysis (SNA) but made sure it felt relevant to the folks living and breathing project delivery every day. The focus was on the connections *between groups*, the dynamics of the stakeholders, and the surrounding context and complexity throughout the project’s life. These elements are absolutely critical when you’re dealing with dynamic, spread-out work settings.

The questionnaire we designed wasn’t just a random list of questions. It was carefully structured to allow for different levels of analysis – looking at individual players (nodes) and the network as a whole. This means you can calculate cool metrics like how connected someone is (degree centrality), how close they are to everyone else (closeness centrality), or how often they sit between other people’s communication paths (betweenness centrality). We also made sure the responses could be compared across different projects.
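
As a rough illustration of the node-level part of that analysis, here is a minimal Python sketch using the networkx library; the stakeholder names and ties are invented for the example rather than drawn from the dataset.

```python
import networkx as nx

# Hypothetical stakeholder ties for one project (not from the dataset).
edges = [
    ("Client", "Project Manager"),
    ("Project Manager", "Main Contractor"),
    ("Project Manager", "Regulator"),
    ("Main Contractor", "Supplier"),
    ("Main Contractor", "Subcontractor"),
    ("Regulator", "Community Group"),
]

G = nx.Graph()
G.add_edges_from(edges)

# The three node-level metrics mentioned above.
degree = nx.degree_centrality(G)            # share of direct connections
closeness = nx.closeness_centrality(G)      # inverse of average shortest-path distance
betweenness = nx.betweenness_centrality(G)  # share of shortest paths passing through a node

for node in G.nodes:
    print(f"{node:16s} degree={degree[node]:.2f} "
          f"closeness={closeness[node]:.2f} betweenness={betweenness[node]:.2f}")
```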

We used a mix of question types – some where you pick from options (closed-ended) and some where you can write a bit more (open-ended). This helped us get both structured data and some rich, contextual details. Designing these kinds of questionnaires for network data also involves thinking ahead about how you’re going to crunch the numbers later. Data manipulation is key!

Our approach started with mapping out the core concepts we wanted to capture and then sequencing the questions logically. This makes it easier for the person filling it out. The questionnaire is set firmly in the project world and serves as a blueprint for how you might design similar studies. We basically mashed up ideas from project environments, social network theory, complexity science, and performance measurement into our survey design process.

By combining these core areas, this dataset offers a fantastic starting point for future research. Researchers can use it to test hypotheses about how relationships and complexity influence project outcomes, benchmark different relational patterns, and apply network measures in comparative studies.


What’s Inside the Data?

To really understand this dataset, you need to get a handle on some basic network lingo. Networks are made of nodes (the actors, like people or organisations) and edges or lines (the ties or relationships between them). The whole network is essentially a map of these relationships.

Our study dives into node-based attributes within stakeholder networks. We’re talking about things like:

  • Degree Centrality: How many direct connections a node has. More connections often mean more influence or access to information.
  • Closeness Centrality: How close a node is to all other nodes in the network (measured by the shortest path). Closer nodes can reach others faster.
  • Betweenness Centrality: How often a node lies on the shortest path between two other nodes. Nodes with high betweenness can control information flow.
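
For reference, these are the textbook formulations of the three measures, for a network with n nodes, shortest-path distance d(u, v), and σ_st shortest paths between s and t of which σ_st(v) pass through v (the published dataset may use slightly different normalisations):

$$
C_D(v) = \frac{\deg(v)}{n-1}, \qquad
C_C(v) = \frac{n-1}{\sum_{u \neq v} d(v,u)}, \qquad
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
$$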

We also looked at the strength of ties – how strong or weak a connection is. This isn’t just about who you know, but *how well* you know them or how important the connection is. Factors like time spent interacting, emotional intensity, intimacy, and reciprocal service all play a role in tie strength. Understanding this adds another layer to the network picture.

Beyond individual nodes, we also capture network-wide measures called centralisation. This tells you how concentrated the connections are across the *entire* network. A highly centralised network might have one or a few key players with lots of connections, while low centralisation means connections are more evenly distributed.
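
For readers who want the formula, the usual way to quantify this is Freeman's centralisation index, which compares every node's centrality with the most central node v* and divides by the largest value that total could take in a network of the same size (for degree centrality that maximum works out to (n-1)(n-2)). This is the textbook formulation; tools like Gephi may apply their own normalisation:

$$
C_{\text{net}} \;=\; \frac{\sum_{i=1}^{n}\bigl[\,C(v^{*}) - C(v_i)\,\bigr]}{\max \sum_{i=1}^{n}\bigl[\,C(v^{*}) - C(v_i)\,\bigr]}
$$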

We even touched on some classic work in this area, like that of Alex Bavelas and Harold Leavitt from way back. They did cool experiments in the 40s and 50s showing how different communication patterns in groups affect things like efficiency and leadership emergence. Their concepts of centrality and distance are still super relevant today.

How the Data Was Gathered

Collecting this kind of data from busy project managers requires a thoughtful approach. We used a structured questionnaire, broken down into sections or “Instruments” as we called them, to cover different aspects of the study.

It started with the basics:

  • Instrument 1: Getting contact info (pretty standard!).
  • Instrument 2: Project background – what kind of project was it, where, public/private, and initial thoughts on its success (time, cost, scope).
  • Instrument 3: Socio-demographics of the respondent – their industry, role, experience. This helps contextualise their perspective.

Then we got into the juicy stuff:

  • Instrument 4: Mapping professional networks. Asking respondents to list key stakeholders and classify them (client, contractor, etc.).
  • Instrument 5: Diving into network relationships. This is where we asked about the *strength* of the ties between those stakeholders, often using a scale (Very Weak to Very Strong).
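
To make that concrete, here is a minimal sketch of turning an ordinal tie-strength scale into numeric edge weights; the label-to-number mapping (including the middle label) is an assumption for illustration, not the coding scheme used in the published dataset.

```python
import pandas as pd

# Hypothetical transcription of one respondent's Instrument 5 answers.
ties = pd.DataFrame({
    "source": ["Client", "Client", "Main Contractor"],
    "target": ["Project Manager", "Main Contractor", "Supplier"],
    "strength_label": ["Very Strong", "Weak", "Strong"],
})

# Assumed mapping from scale labels to numeric weights (for illustration only).
scale = {"Very Weak": 1, "Weak": 2, "Moderate": 3, "Strong": 4, "Very Strong": 5}
ties["weight"] = ties["strength_label"].map(scale)

print(ties)
```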

After the network details, we shifted focus:

  • Instrument 6: Project Complexity. We used the CIFTER model, which breaks complexity down into lots of indicators and attributes related to stakeholders, governance, scope, location, technology, resources, and more. Respondents rated aspects like the stability of the project context (a rough, generic scoring sketch follows just after this list).
  • Instrument 7: Objective Project Performance. Asking for quantifiable details like actual cost, schedule, and scope outcomes.
  • Instrument 8: Overall Performance and Reflection. More subjective questions about quality, satisfaction, and lessons learned, often using open-ended formats.
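
Here is that rough, generic scoring sketch. It is not the CIFTER scoring method itself, just an illustration of rolling indicator ratings up into a single number; the indicators, ratings, and the 1–4 scale are all invented for the example.

```python
import pandas as pd

# Generic illustration only: NOT the CIFTER scoring scheme.
# Indicators, ratings, and the assumed 1-4 scale are invented.
ratings = pd.DataFrame({
    "indicator": ["stakeholder cohesion", "governance clarity",
                  "scope stability", "technology novelty"],
    "rating": [3, 2, 4, 3],
})

# Roll the indicator ratings up into a single summary score.
print("summary complexity score:", ratings["rating"].sum())
```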

We used an online platform called REDCap for the questionnaire. This made it easy for project managers to access it on different devices and helped us collect data securely and efficiently. We also did a pilot test beforehand to iron out any kinks and make sure the questions were clear.

Ethical considerations were paramount. We got written consent from everyone, and crucially, we de-identified all the data. Respondent names were replaced with numbers, and stakeholder names were anonymised by focusing on their roles and representative descriptions rather than actual names. Privacy was a top priority.

We used a mixed-methods approach – combining structured questions (quantitative) with open-ended ones (qualitative). This gives you the best of both worlds: consistent data for analysis and rich context from the respondents’ own words.

When it came to picking who to survey, we used a mix of stratified and simple random sampling. Stratified means we divided the potential respondents into groups based on certain criteria before randomly selecting from those groups. We also used a little bit of snowball sampling – where initial participants helped us find others – which is handy for reaching people in specific networks.
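
To make the sampling idea concrete, here is a minimal pandas sketch of stratified random sampling; the sampling frame, strata, and per-stratum sample size are invented for illustration.

```python
import pandas as pd

# Hypothetical sampling frame of potential respondents (not the study's actual frame).
frame = pd.DataFrame({
    "respondent_id": range(1, 101),
    "sector": ["infrastructure", "building", "energy", "transport"] * 25,
})

# Stratified random sampling: split into strata, then draw at random within each.
sample = frame.groupby("sector").sample(n=5, random_state=42)
print(sample["sector"].value_counts())
```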


Diving into the Analysis

Once the data was collected (over 100 responses, after cleaning!), it was organised into four main groups: basic project info, social network data, complexity factors, and performance measures.

Then came the fun part: the analysis! We used a bunch of different tools to crunch the numbers and visualise the networks.

  • Excel: For initial sorting, cleaning, and some basic calculations.
  • Gephi: A fantastic tool specifically for visualising and analysing networks. This is where we calculated things like graph density, average degree, and clustering coefficients.
  • SPSS, PNet, and Python: Used for further statistical analysis, modelling, and refining the analytical approaches.

For example, we showed how the network data for a single respondent (Respondent 153) could be transcribed into a spreadsheet and then imported into Gephi to calculate metrics and visualise the connections, with line thickness showing tie strength. Pretty neat way to see relationships mapped out!
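
Here is a minimal sketch of that transcription step, assuming a simple Source/Target/Weight edge-list layout that Gephi can import as a spreadsheet; the ties are invented rather than Respondent 153's actual data, and the networkx calls reproduce the whole-network measures mentioned above.

```python
import pandas as pd
import networkx as nx

# Invented ties standing in for one respondent's transcribed network.
edges = pd.DataFrame({
    "Source": ["Client", "Client", "Project Manager", "Main Contractor"],
    "Target": ["Project Manager", "Regulator", "Main Contractor", "Supplier"],
    "Weight": [5, 2, 4, 3],  # tie strength, e.g. Very Weak = 1 ... Very Strong = 5
})

# Save as a CSV edge list for import into Gephi (line thickness can map to Weight).
edges.to_csv("respondent_edges.csv", index=False)

# Whole-network measures, computed with networkx.
G = nx.from_pandas_edgelist(edges, "Source", "Target", edge_attr="Weight")
print("density:", nx.density(G))
print("average degree:", sum(dict(G.degree()).values()) / G.number_of_nodes())
print("average clustering:", nx.average_clustering(G))
```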

The dataset itself is publicly available for anyone interested in digging in. You can find it at the Harvard Dataverse repository. It includes the full questionnaire PDF and two Excel files: one with the detailed network data (nodes, types, tie strength for each respondent) and another with the project background, complexity factors, performance metrics, and other details. We even added comments in the Excel headers to explain how some of the calculated complexity measures were derived.
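
If you want to explore the files in Python, a minimal loading sketch looks like this; the file names below are placeholders, so substitute the actual names from the Dataverse record.

```python
import pandas as pd

# Placeholder file names: replace with the actual files from the Harvard Dataverse record.
network_data = pd.read_excel("network_data.xlsx")        # nodes, types, tie strength per respondent
project_data = pd.read_excel("project_background.xlsx")  # background, complexity, performance

print(network_data.head())
print(project_data.head())
```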

We put a lot of thought into making sure the questionnaire was both valid and reliable. Validity means it actually measures what it’s supposed to measure. We focused on “context-based validity,” ensuring the questions aligned perfectly with the research area (project management and SNA). Reliability means you’d get consistent results if you administered it again. We checked for “internal consistency reliability” using something called Cronbach’s alpha, which looks at how well different questions measuring the same thing hang together.
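
For the curious, Cronbach's alpha can be computed directly from a respondents-by-items matrix of scores; here is a minimal sketch with invented responses, not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented Likert-style responses (5 respondents x 4 items).
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```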

We also tried hard to make it easy for busy project managers to complete. We broke it into sections, used mostly closed-ended questions, estimated the completion time, and allowed them to save and come back later. Little things that make a big difference when you’re asking for people’s valuable time!


It’s Not Perfect (But It’s Valuable)

Look, no research is ever completely without its bumps. We know there are limitations. Human error is always a possibility, no matter how careful you are. And figuring out direct cause-and-effect relationships from survey data can be tricky – correlation doesn’t always equal causation! Also, sometimes the way you phrase a question can influence the answer, or you might miss asking about something important.

Despite these inherent challenges, we designed this study specifically to collect data that is as reliable and valid as possible. We wanted to shed light on how networks, complexity, and performance are linked, especially in the increasingly complex world of large projects.

There’s plenty of general advice out there on designing surveys, but not a lot specifically tailored for combining project management and social network analysis. That’s where we hope this research makes a difference. We’ve put together a specific approach, combining unique instruments, survey techniques, and theoretical ideas for design and interpretation.

The core of this questionnaire lies in applying social network analysis and looking at project complexity. Instruments 4 and 5, focusing on the strength of ties within networks, are particularly key to testing the main ideas of the study.

Ultimately, this dataset is a resource. It’s there for researchers and practitioners who want to dig deeper into the fascinating interplay between who you know, how complex your project is, and whether you hit your goals. We think it provides a solid foundation for understanding the “social side” of project success.


Source: Springer
