I am working with two different datasets: taxi rides (9,000,000 observations) and fire incidents (220 incidents). Both contain variables for latitude, longitude, and a datetime converted to SIF. I have also created additional variables for month, day, and hour.

I am trying to create a dummy variable in the taxi dataset, using the fire incidents as a control group, which flags all rides picked up within the same vicinity (say, 200 m) of the location of each fire incident during the time it was happening. Since incidents happened at different locations at different times, I am not entirely sure how to code it. I have tried running geodist in the taxi dataset against the geolocation of each of the 220 fire incidents, but I do not think that is a good way, as it takes too long to do manually.

I'm a PhD student and would greatly appreciate your valuable comments on the following. I'm running a gravity equation for a panel of around 50,000 country pairs over 20 years. My equation also includes many interaction variables (time-invariant variables interacted with year dummies to make them time-varying).

When I run the ppml command with country and time fixed effects, clustering the standard errors by country pair and using the strict option, I get many warning messages saying that most of the interaction variables have very large values and suggesting that I rescale or recenter them. In addition, 56 regressors were excluded to ensure that the estimates exist. Ultimately, after many iterations, the output was produced without standard errors or p-values. My question is: can I run the command without the strict option?

I then tried ppml_panel_sg and got results with all the values, but it took more than 10 hours to generate the estimates. Also, when I tried the RESET test after the ppml_panel_sg estimation, it ran forever without producing output, and I got the error message r(3900) after a day, even after increasing matsize. Your kind comments are greatly appreciated.

I am working on a project on maternal vaccination during pregnancy and acute respiratory infections in children. I have used multiple imputation to impute missing values for selected covariates.

Code:
mi register imputed infant_weight gestational_age maternal_bmi parity smoking_pregnancy socioeconomic_status
mi register regular vacc_status date_birth child_sex child_indigenous_status mother_indigenous_status plural_birth maternal_age
mi impute chained (logit) smoking_pregnancy (pmm, knn(10)) parity (ologit) socioeconomic_status (regress) infant_weight gestational_age maternal_bmi = i.vacc_status date_birth i.child_sex i.child_indigenous_status i.mother_indigenous_status i.plural_birth maternal_age, add(20) rseed(123) augment

2) Inverse probability treatment weighting (IPTW)

To control for the probability of vaccination, we calculated inverse probability of treatment weights using predicted probabilities derived from multivariate logistic regression.

Code:
mi xeq: logistic vacc_status maternal_age maternal_bmi i.mother_indigenous_status parity year_birth month_birth i.smoking_pregnancy i.socioeconomic_status
mi xeq: replace IPTW = 1/p if vacc_status == 1 // Vaccinated = 1
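On the IPTW step in the maternal-vaccination question: the posted code weights only the vaccinated by 1/p, while the usual (unstabilized, ATE) scheme also weights the untreated by 1/(1-p), where p is the estimated probability of treatment. A minimal language-agnostic sketch in Python; the function name and the toy propensities are illustrative, not from the original post:

```python
def iptw_weights(propensity, treated):
    """Inverse probability of treatment weights:
    treated units get 1/p, untreated units get 1/(1-p),
    where p is the estimated probability of receiving treatment."""
    return [1.0 / p if t == 1 else 1.0 / (1.0 - p)
            for p, t in zip(propensity, treated)]

# Toy example: two strata with propensities 0.8 and 0.2.
p = [0.8, 0.8, 0.2, 0.2]
t = [1, 0, 1, 0]
w = iptw_weights(p, t)
# w is roughly [1.25, 5.0, 5.0, 1.25] (up to floating-point noise):
# likely treated units get small weights, unlikely ones get large weights.
```

The weighted sample forms a pseudo-population in which treatment is independent of the measured covariates, which is why extreme propensities (p near 0 or 1) produce unstable weights and are often trimmed or stabilized.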
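On the "very large values, consider rescaling or recentering" warnings in the gravity-equation question: recentering and rescaling a regressor (for example, z-scoring it before interacting it with year dummies) changes only the scale of its coefficient, not the fit, and tends to help the optimizer. A minimal sketch of the idea; the function name is mine, not a Stata or ppml feature:

```python
def zscore(xs):
    """Recenter a series to mean 0 and rescale to sample standard deviation 1."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return [(x - mean) / var ** 0.5 for x in xs]

# GDP-sized magnitudes become well-scaled regressors, approx. [-1.0, 0.0, 1.0]:
z = zscore([1e12, 2e12, 3e12])
```

The coefficient on a z-scored regressor is then interpreted per one standard deviation of the original variable.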
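On the taxi/fire question: rather than running geodist manually per incident, the logic is to check each ride against each incident's location and time window and flag rides within the radius while the fire was active (in Stata one would typically loop over the 220 incidents or pair datasets with joinby). A Python sketch of that logic under stated assumptions; all names and the toy coordinates are illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula, Earth radius 6371 km)."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))

def near_active_fire(ride, fires, radius_m=200.0):
    """ride = (lat, lon, time); fires = iterable of (lat, lon, start, end).
    Returns 1 if the pickup is within radius_m of any fire while it burned."""
    lat, lon, t = ride
    return int(any(start <= t <= end and
                   haversine_m(lat, lon, flat, flon) <= radius_m
                   for flat, flon, start, end in fires))

fires = [(40.7128, -74.0060, 100, 200)]                   # one incident, active for t in [100, 200]
flag1 = near_active_fire((40.7128, -74.0060, 150), fires)  # 1: same spot, during the fire
flag0 = near_active_fire((40.7128, -74.0060, 300), fires)  # 0: after the fire ended
```

Checking the cheap time condition before the distance keeps the cost manageable: most of the 9,000,000 rides are rejected without computing any distance at all.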