Sequential methods in meta-analysis
Location: D'Arcy Thompson Room, School of Computing Sciences, UEA
Date: 13:00-14:00 13 Jun 2012
Organiser: Prof. Elena Kulinskaya
Institution: School of Computing Sciences, UEA
Sequential meta-analysis, or how many more trials do we need?
Abstract: The view that the results of meta-analyses should be used for the design of new studies is well established. Here I explain the inherent difficulties that arise if the random effects model (REM) of meta-analysis is to be used. Depending on the heterogeneity of the existing studies, the question of how many more patients are needed cannot be answered; the correct question is how many more studies are needed. In this talk I explain why this is the case and discuss the implications for sequential meta-analysis. Examples include the well-known data set on magnesium for myocardial infarction.
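As a minimal sketch of why heterogeneity changes the question (assuming the standard REM with within-study variances sigma_i^2 and between-study variance tau^2; all numbers below are made up):

    import numpy as np

    def rem_variance(within_var, tau2):
        # Variance of the REM pooled estimate: 1 / sum_i 1/(sigma_i^2 + tau^2)
        w = 1.0 / (np.asarray(within_var) + tau2)
        return 1.0 / w.sum()

    # Even with infinitely many patients per study (sigma_i^2 -> 0), each
    # study's weight is capped at 1/tau^2, so with k studies the pooled
    # variance cannot fall below tau^2 / k: only more studies can reduce it.
    tau2 = 0.04
    print(rem_variance([0.10, 0.08, 0.12], tau2))  # three finite studies
    print(rem_variance([0.0, 0.0, 0.0], tau2))     # limiting case: tau2 / 3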

Time Series Data Mining
Location: D'Arcy Thompson Room, School of Computing Sciences, UEA
Date: 13:00-14:00 15 May 2012
Speaker: Dr. Tony Bagnall
Organiser: Prof. Elena Kulinskaya
Institution: School of Computing Sciences, UEA

Sensitivity analysis for publication bias in systematic reviews
Location: EFRY 1.34
Date: 12:00-13:00 9 May 2012
Speaker: Prof John Copas
Organiser: Prof. Elena Kulinskaya
Institution: Warwick
Abstract: Publication bias is a major threat to the validity of systematic reviews. Correcting for publication bias is difficult, if not impossible, but by modelling how publication bias arises as a consequence of non-random study selection we can at least see how sensitively the results of a meta-analysis depend on the degree of study selection (the number of 'missing studies'). My seminar will suggest how this can be done and how it works out in practice. The motivating example is a published systematic review of a treatment for heart disease, which suggested a strongly significant treatment effect but was later completely contradicted by the findings of a large multi-centre randomized clinical trial.
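A toy simulation (my own illustration, not Copas's selection model) of how non-random study selection biases a naive pooled estimate; the selection rule and all numbers are hypothetical:

    import numpy as np

    rng = np.random.default_rng(1)

    def pooled(theta_hat, se):
        # inverse-variance weighted (fixed-effect) pooled estimate
        w = 1.0 / se**2
        return (w * theta_hat).sum() / w.sum()

    se = rng.uniform(0.1, 0.5, size=2000)
    theta_hat = rng.normal(0.0, se)           # the true effect is zero
    for p_publish_nonsig in (1.0, 0.5, 0.1):  # weaker -> stronger selection
        significant = theta_hat / se > 1.64
        published = significant | (rng.random(2000) < p_publish_nonsig)
        print(p_publish_nonsig, round(pooled(theta_hat[published], se[published]), 3))

The pooled estimate drifts away from zero as more non-significant studies go 'missing', which is exactly the sensitivity that varying the assumed degree of selection makes visible.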

Diagnostic accuracy reviews
Location: SCI 0.67
Date: 16:00-17:00 18 Apr 2012
Speaker: Dr Lee Hooper
Organiser: Prof Elena Kulinskaya
Institution: Norwich Medical School, UEA
Abstract: Diagnostic accuracy systematic reviews – methods, concepts and an example (signs of water-loss dehydration in older people).
Systematically reviewing studies of diagnostic accuracy involves different methods, concepts and challenges from those of reviewing other study designs. Some of the difficulties are due to still-developing methods, but much progress has been made over the past decade. This talk will tackle some of these concepts and challenges, using an ongoing Cochrane review of diagnostic accuracy as an example. The review aims to assess the diagnostic accuracy of clinical and physical signs that may be used to screen for water-loss dehydration in people aged 65 years or more.
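For readers unfamiliar with the basic quantities, a minimal sketch using a made-up 2x2 table (sign present/absent against dehydrated/not dehydrated; the counts are purely illustrative):

    # tp: sign present & dehydrated, fp: sign present & not dehydrated,
    # fn: sign absent & dehydrated,  tn: sign absent & not dehydrated
    tp, fp, fn, tn = 30, 20, 10, 90

    sensitivity = tp / (tp + fn)  # P(sign present | dehydrated)
    specificity = tn / (tn + fp)  # P(sign absent | not dehydrated)
    print(sensitivity, specificity)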

Multivariate meta-analysis and multiple treatment meta-analysis
Location: SCI 0.66
Date: 12:00-13:00 28 Mar 2012
Speaker: Prof. Ian White
Organiser: Prof. Elena Kulinskaya
Institution: Cambridge University
Abstract: Many meta-analysis problems involve combining estimates of more than one quantity: for example, combining estimates of treatment effects on two or more outcomes, or combining estimates of contrasts between three or more groups. Such problems can be tackled using multivariate meta-analysis. I will describe the multivariate random-effects meta-analysis model, how it can be fitted, and its strengths and weaknesses compared to a set of univariate meta-analyses.
I will then discuss multiple treatments meta-analysis, in which a number of treatments are compared across two-arm and multi-arm trials; the typical aims are to determine whether the evidence from different trials is consistent and, if so, which treatment is best. I will show how models expressing consistency and inconsistency can be formulated as multivariate random-effects meta-regressions. The talk will be illustrated using my Stata software, mvmeta.
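As a minimal sketch of the bivariate case of the model (my own maximum-likelihood illustration with made-up data, not how mvmeta is implemented): each study i reports a pair of estimates y_i ~ N(theta, S_i + Sigma), where S_i is the known within-study covariance and Sigma the between-study covariance.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    y = np.array([[0.2, 0.5], [0.1, 0.3], [0.4, 0.6]])  # made-up estimates
    S = [np.diag([0.02, 0.03])] * 3                     # within-study covariances

    def negloglik(par):
        theta = par[:2]
        a, b, r = par[2], par[3], np.tanh(par[4])       # keeps |r| < 1
        Sigma = np.array([[a*a, r*a*b], [r*a*b, b*b]])  # between-study covariance
        return -sum(multivariate_normal.logpdf(y[i], theta, S[i] + Sigma)
                    for i in range(len(y)))

    fit = minimize(negloglik, x0=[0, 0, 0.1, 0.1, 0.0], method="Nelder-Mead")
    print("pooled effects:", fit.x[:2])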

Machine Learning Ensemble for Knowledge Discovery
Location: D'Arcy Thompson Room, School of Computing Sciences, UEA
Date: 13:00-14:00 14 Mar 2012
Speaker: Dr. Wenjia Wang
Organiser: Prof. Elena Kulinskaya
Institution: School of Computing Sciences, UEA
Abstract: An ensemble, in the context of machine learning, can be broadly defined as a paradigm that induces multiple individual models independently from data and combines their outputs with a decision fusion function to produce a more reliable and accurate answer for a given problem.
Various ensemble methods have been developed and are increasingly used in many applications, but some fundamental issues remain to be addressed, including what factors affect the accuracy of an ensemble. This talk will present some recent research results and their applications.
Firstly, it will introduce the basic concepts of ensemble learning and the possible relationships between two key factors: accuracy and diversity. Then it will discuss some recent applications, including cluster analysis, feature (gene) selection and seabed habitat identification, with a focus on identifying gene-environment interactions influencing bone mineral density (osteoporosis).
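A minimal sketch of the paradigm as defined above (a bagging-style ensemble of decision trees with majority-vote fusion; the data set and all settings are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    members = []
    for _ in range(25):                            # induce diverse models
        idx = rng.integers(0, len(Xtr), len(Xtr))  # bootstrap resample
        members.append(DecisionTreeClassifier().fit(Xtr[idx], ytr[idx]))

    votes = np.stack([m.predict(Xte) for m in members])
    fused = (votes.mean(axis=0) > 0.5).astype(int)  # majority-vote fusion
    print("ensemble accuracy:", (fused == yte).mean())

Resampling the training data is one simple way to create the diversity among members that the talk relates to ensemble accuracy.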

On multivariate time series for counts
Location: D'Arcy Thompson Room, School of Computing Sciences, UEA
Date: 13:00-14:00 9 Mar 2012
Speaker: Dr. Dimitris Karlis
Organiser: Dr. Aristidis Nikoloulopoulos
Institution: Department of Statistics, Athens University of Economics and Business
Abstract: Non-negative integer-valued time series are often encountered in many different scientific fields, usually in the form of counts of events at consecutive time points. Such examples can be found in epidemiology, ecology and finance, to name a few. A wide variety of models appropriate for treating count time series data have been proposed in the literature, mainly for the univariate case. The analysis of multivariate counting processes presents many more difficulties. In particular, the need to account for both serial and cross-correlation complicates model specification, estimation and inference. Many of the models built for count time series data are based on the thinning operator of Steutel and van Harn (1979). The model in its simplest form, i.e. the first-order integer-valued autoregressive model (INAR(1)), was introduced by McKenzie (1985) and Al-Osh and Alzaid (1987).
In this talk, extensions to the multi-dimensional space will be discussed and their basic statistical properties examined. To aid the exposition, special attention will be given to the bivariate case. The multivariate case poses certain challenges, especially as far as estimation is concerned. Such estimation problems do not arise in the bivariate case, where estimation can be achieved using either the maximum likelihood approach or the method of Yule-Walker. Extensions to incorporate covariate information are also discussed, and emphasis is placed on models with multivariate Poisson and multivariate negative binomial innovations. Real data problems are used to illustrate the models. An actuarial application will also be discussed.
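A minimal simulation sketch of the INAR(1) model mentioned above, X_t = alpha o X_{t-1} + eps_t, where alpha o X denotes binomial thinning (each of the X units survives independently with probability alpha) and eps_t is a Poisson innovation; the parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_inar1(alpha, lam, n):
        x = np.zeros(n, dtype=int)
        x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
        for t in range(1, n):
            # binomial thinning of the previous count, plus Poisson innovation
            x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
        return x

    x = simulate_inar1(alpha=0.5, lam=2.0, n=1000)
    print(np.corrcoef(x[:-1], x[1:])[0, 1])    # lag-1 autocorrelation ~ alpha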

Investigating inconsistencies in mixed treatment analysis: a case study
Location: EFRY 1.34
Date: 12:00-13:00 29 Feb 2012
Speaker: Dr Asmaa Abdelhamid (NMS)
Organiser: Prof Elena Kulinskaya
Institution: Norwich Medical School, UEA
Abstract: The use of indirect and mixed treatment comparison methods (ITC and MTC) to compare the ever-increasing number of competing clinical interventions is becoming gradually more acceptable. Understanding of the factors associated with the validity of these methods is hampered by very limited and often conflicting empirical evidence. Although the basic assumption for a valid mixed treatment analysis is theoretically clear, practically useful methods for assessing the appropriateness of ITC and MTC have not been systematically developed and tested. We aimed to explore the factors (including results of similarity assessment and other comparison characteristics) associated with the validity of adjusted indirect comparison. In a Cochrane systematic review that provided sufficient data for both direct and indirect comparison of two antibiotics (ciprofloxacin and rifampin) for preventing meningococcal infections, striking inconsistency was observed. The direct comparison (DC) found that prophylactic ciprofloxacin tended to be less efficacious than rifampin in the eradication of N. meningitidis (OR 2.75, 95% CI 0.93 to 8.12), while the corresponding indirect comparison provided a contrasting result in favour of ciprofloxacin (OR 0.11, 95% CI 0.03 to 0.41). We closely examined the three sets of trials for baseline comparability of participants, interventions and other factors, in order to identify treatment-effect modifiers that could have affected the results. We also surveyed the review authors for their views. We found many differences that could have contributed to the discrepancy, including different doses and different inclusion criteria.
In summary, the results of an adjusted indirect comparison may not be consistent with the DC results due to an imbalanced distribution of treatment-effect modifiers.
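For context, a minimal sketch of a standard adjusted indirect comparison (the Bucher method) of treatments A and B via a common comparator C; the odds ratios and intervals below are hypothetical, not those of the review:

    import numpy as np

    def indirect(or_ac, ci_ac, or_bc, ci_bc):
        # log OR_AB = log OR_AC - log OR_BC, with variances adding
        se = lambda ci: (np.log(ci[1]) - np.log(ci[0])) / (2 * 1.96)
        log_or = np.log(or_ac) - np.log(or_bc)
        s = np.hypot(se(ci_ac), se(ci_bc))
        return (np.exp(log_or),
                np.exp(log_or - 1.96 * s), np.exp(log_or + 1.96 * s))

    print(indirect(0.5, (0.3, 0.83), 1.2, (0.8, 1.8)))  # OR_AB with 95% CI

Because the method inherits any imbalance in treatment-effect modifiers between the two sets of trials, discrepancies like the one described above can arise even when each direct comparison is internally valid.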

A meta-analysis of the preference reversal phenomenon
Location: EFRY 1.34
Date: 12:00-13:00 18 Jan 2012
Speaker: Peter Moffatt
Organiser: Prof. Elena Kulinskaya
Institution: School of Economics, UEA
Abstract: The "preference reversal" phenomenon is the well-known tendency for experimental subjects to choose the safer of two lotteries when asked to choose between them, but to contradict this choice by attaching a higher valuation to the riskier lottery. The results from a comprehensive sample of published and unpublished studies of the reversal phenomenon are pooled and analysed. The principal objective is to estimate structural models in order to investigate the differences in behaviour between choice and valuation. In particular, we aim to estimate the attitude to risk implied by the pooled data on choice and valuations separately. A secondary objective is to consider ways in which the choice of method of eliciting valuations affects the implied degree of risk aversion. More precisely, are some methods of valuation elicitation better at extracting "true" valuations than others? This question is addressed by allowing the risk aversion estimate to depend on the elicitation method in the econometric model. We find that increasing the number of tasks brings valuations closer to choices, as does the use of the Random Lottery Incentive scheme, but that the use of the Becker-DeGroot-Marschak elicitation scheme increases the discrepancy between valuations and choices. Using the estimates from the econometric model, an algorithm is developed which can be used to predict the outcome of a preference reversal experiment.