State Representation

Notice, Wonder, Connect
Slow Reveal

This bar graph focuses on state legislators. It shows the ratios of population to state representative and to state senator.  How do the ratios in Alaska compare to those in other states or the national average?  What factors might influence how these ratios vary from state to state?

As seen in this graph, the ratio of population to legislators tends to increase as a state's population increases (California has the largest population and the largest ratio), but this is not always the case. Alaska's population is the third smallest (after Wyoming and Vermont), but its ratio of population to senators is the 7th smallest and its ratio of population to representatives is the 10th smallest.

The United States has a federalist system of government, meaning that powers are shared between the federal government and the state governments. While we often spend time talking about the federal government, state government plays an equal (and in many ways, greater) role in your day-to-day life. Like the federal government, most states have a legislature with two chambers (called a bicameral legislature). The one exception is Nebraska, which has only state senators and no state representatives (called a unicameral legislature).

Both the federal government and state governments have to find a balance when deciding the size of their legislative bodies. A larger body means more representation and potentially a wider range of voices heard, but it could also mean too many people talking, more money spent on salaries, and more elected officials to keep track of. Each state decides how it wants to handle this issue, but, generally, as a state's population increases, so does the number of representatives it has. That rise is usually not linear: as the population increases, the number of representatives normally grows at a slower and slower rate until it stops growing altogether, which means more people are represented by each legislator. We can look at the ratio of people per legislator to see how large an average district is within a state.

The writers of the U.S. Constitution believed a ratio of 30,000 people per representative was the proper balance for federal representation. Now, that ratio is over 25 times larger. The United States was much more rural at its founding, so 30,000 people covered a much larger geographic area than they do now. Communication technology was also far less advanced, so larger areas communicated less frequently than they do now. Taking those factors into consideration, what do you think of the current ratio for federal representation? Is 30,000:1 still the proper balance for federal representation? What about for state legislatures?
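As a quick arithmetic check on the "over 25 times larger" claim, here is a minimal sketch; the roughly 331.4 million residents (2020 census) and the 435 House seats are figures we supply here, not numbers from the graph.

```python
# Rough check of the current federal representation ratio.
# Assumed inputs: ~331.4 million U.S. residents (2020 census) and 435 House seats.
us_population = 331_400_000
house_seats = 435

people_per_rep = us_population / house_seats
print(f"People per representative: {people_per_rep:,.0f}")                        # roughly 762,000
print(f"Multiple of the original 30,000:1 ratio: {people_per_rep / 30_000:.1f}")  # about 25.4
```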

What other solutions do you think would help balance representation and the size of the legislature?

Data Source: https://ballotpedia.org/Population_represented_by_state_legislators

Reproduce the graph yourself or pick different states using the spreadsheet below.

Additional Resources

Arctic Winter Games

Slow Reveal
Notice, Wonder, Connect

This data shows the medal tallies for the 2023 Arctic Winter Games in a pictographic format.  Complete listings are here.  There’s a wide range in the number of medals won by each team, from close to 25 total to close to 165 (numbers are approximate because of rounding).

There are a variety of possible reasons for why some teams win more medals:

  • Some teams are bigger than other teams
    • The population pools are larger for some teams
    • There are more than 730,000 people in Alaska and fewer than 14,500 in Nunavik.
  • Some teams are located closer to where the Games are held, making it easier for more athletes to travel
  • Some teams can afford to bring more athletes because of the cost of:
    • Transportation to the Games
    • Uniforms, equipment
    • Food and lodging during the Games
  • Athletes from some teams are better prepared in some sports
    • Training knowledge and experience is greater in some communities than in others
    • Ongoing financial support for all or some sports varies and impacts:
      • What kinds of sports facilities, training, equipment are available back home, when, and for whom
  • The number of events in different sports varies considerably.
    • There are two or three events in most team sports such as Basketball, Curling or Volleyball.  In contrast, there are multiple events in individual sports such as Arctic Sports (35), Wrestling (25), Dene (24) and Cross Country Skiing (24).
Contingent populations:
  • Alaska: 733,583
  • Northern Alberta: 386,000 (2011)
  • Sápmi (Sámi people of the Sápmi region, northern Scandinavia): unknown; estimates range from 50,000 to 135,000
  • Greenland: 56,562
  • Northwest Territories: 45,605
  • Yukon: 43,789
  • Nunavut: 40,526
  • Nunavik: 14,405 (2021)

The contingent with the largest population, Alaska, took home the second most medals.  The team with the most medals, Yukon, is the contingent with the third smallest population.  

Additional Resources:

Alaska Well-represented at 2023 Arctic Winter Games in Canada (Alaska News Source)

Juneau Athletes Represent at First Arctic Winter Games Since 2018 (KTOO)

Arctic Winter Games Catalog

Interactive Medal Standings

Peer Climate

Student suggestions for headlines: “The Truth Behind School,” “Students Know Students Better Than Staff”, “The Aftermath”

Slow Reveal

Notice/Wonder/Connect

This graph compares how students and staff across Alaska perceive peer climate among students, grades 6-12.  Staff perceptions have been consistently 20 percentage points higher than students’, including when they both rose 9 percentage points during the first full year of Covid.  What might explain the difference in perception?

Background

These questions are part of the School Climate and Connectedness Survey (SCCS), in which 32 (of 54) Alaskan school districts participate annually between January and March. According to AASB's handout about the survey, it was developed by the American Institutes for Research (AIR) in partnership with the Association of Alaska School Boards (AASB). It is designed to measure positive school climate, how connected students feel to adults and peers, social and emotional learning (SEL), and observed risk behaviors at school or school events. There are two student surveys (grades 3-5 and grades 6-12), one staff survey, and one family survey available to school districts. The collection and analysis of data is carried out by AASB.

Participation is not mandatory, and districts pay for the service. Districts that choose not to participate (such as Fairbanks) often conduct a similar review of the social and emotional welfare of their students, families, and staff. This data represents about 3/5 of Alaskan school districts; those districts, in turn, account for more than 3/4 of the students in Alaska. On average, about 1/3 of students in each participating district took the survey, and about 2/3 of students in each participating school took the survey (some schools within participating districts did not take the survey). For more detailed numbers, see the slides. For more information about the survey, its goals, and methodology, click here. The complete data can be accessed here at this statewide public link.

Note: Higher “scores” – about any question – reflect more favorable perceptions of peer climate. 

This graph

The particular data in this graph about peer climate comes from two sets of questions that students and staff were each asked about how respectful and helpful students are to one another. The staff set also included questions about how respectful students were to teachers.

Digging into the data

As is often the case, data prompts more questions even as it answers some.

What might explain the 32 point difference between staff and student ratings?

AASB suggested some possibilities:

  • staff may not know about things happening at school between classes
  • staff may not know about interactions students are having online or via text or phone
  • staff generally rate things higher than students (note: while this was suggested, the actual evidence shows that it is NOT true)

Perhaps the actual questions asked of each group make a difference in the ratings. Both staff and students perceive staff/student relationships to be more respectful than student/student relationships, but only the staff set includes a question about staff/student relationships.  

  • In digging into the specific questions that staff were asked, staff more often scored questions about peer-to-peer culture lower (51%, 59%, 73%) than peer-to-staff culture (65%, 72%). If only peer-to-peer culture were included, the overall rating by staff would be lower (61%, rather than 64%).
  • Similarly, in questions in other sections of the survey, staff and students both rate, “Teachers and students treat each other with respect in this school” very highly at 83% and 73%, respectively.

How much does who is responding, within each group, affect the difference in perception?  Note: this document has not done any analysis of how different groups respond to specific questions, only how they respond to the overall set of questions.  There are probably interesting observations at that level of specificity – and some that might address some of the questions listed below.  Remember that the full results for the state are available here.

Where is the variability among student groups? Any ideas why? 

There is the most variability among demographic groupings in "Difficulty getting basic things" (a difference of 11% from most to least likely to report favorably) and in "Grades" (a difference of 10%). Race/Ethnicity shows the least variability (a difference of 7%). In general, students in groups that are traditionally marginalized (lower grades, lower SES, younger, non-male) are less likely to rate peer climate favorably than those in non-marginalized groups, but that pattern is less clear for race/ethnicity.

  • Difficulty getting basic things for family
    • Students who report no difficulty getting basic things rate peer climate more favorably (33%) than those who do report difficulty (22%). Are students who are already struggling with basic needs at home treated worse by fellow students, or are they more aware of and sensitive to how fellow students are treated by other students? Or both?
  • Grades
    • Students with higher grades are more likely to perceive peer climate favorably (34% for students with mostly A's vs. 24% for those with mostly D's and F's).
  • Gender identification
    • Students who identify as male are the most likely to perceive peer climate favorably (34%), in contrast to females (30%) and those who prefer not to answer (25%) or identify in a different way (24%).
  • Grade Level
    • 12th graders report the most favorable ratings (36%); 8th graders report the least favorable ratings (27%).  
    • In general, as students enter high school and prepare to graduate, their perceptions of peer climate seem to improve.  Do we know, though, what happened to the 8th graders who reported negatively?  Did they drop out of school?  Did their perceptions change over time?
  • Skipping School
    • Students who don’t skip school were more likely (34%) to give favorable ratings to peer climate than those who do skip school (26%).  Does that reflect that students who have and see less positive relationships with peers have more reason to leave school – or that those who are already leaving school face more negative experiences when they are in school?
  • Learning Model
    • Similar to the bump in 2021, students who were doing school in 2022 mostly via distance were more likely (39%) to give favorable ratings to peer climate than those doing school in-person (31%). 
  • Race/Ethnicity
    • American Indian (35%), white (34%) and Alaska Native (33%) students were most likely to rate peer climate favorably; students who identified as Black/African American, or two or more races were least likely to rate peer climate favorably (28% and 29%). 
  • Speaking a language other than English at home
    • Note: the difference between those who did and did not speak a second language at home was only 2 percentage points, so we did not include the details in the slides or here. 
  • When students are considering the SCCS peer climate questions, how much are they rating their own personal interactions with other students and how much are they rating how they see other students treating other students?  Would those answers be different?

Where is the variability in staff groups?

  • School Role
    • Among staff respondents, administrators (72%) were far more likely to say that the peer climate is positive than were classified and certified staff (61% and 64%).  Are administrators commenting on a different reality because students are more careful about how they act in front of them? 
  • Gender Identity
    • Similarly, staff who identify as male or female gave higher ratings (65%) to peer climate than those who prefer not to say or who do not identify in that way (47% and 50%, respectively). Does that difference reflect how staff perceive themselves to be treated, how much more aware some teachers are than others of peer culture, or how students act differently in front of different teachers?
  • Race/Ethnicity
    • Asian (74%) and Native Hawaiian (70%) staff were most likely to perceive peer climate favorably, and staff identifying as Two or More Races (57%) or Alaska Native (58%) were least likely. These highs and lows were more spread out than, and differed from, the corresponding student ratings by race/ethnicity.
  • Length of Time in District
    • Teachers who had worked the longest in the district (more than 15 years) were somewhat more likely to give a favorable rating (68%) than other teachers (the lowest were teachers of 6-10 years, 62% of whom gave a favorable rating). Is that a significant variation? What might explain it? Are more experienced teachers more capable of creating environments where students treat each other and the teacher better?

What might explain the bump in favorability in 2021?

The bump in favorability in 2021 reflects the year of teaching during Covid.  (Remember that schools started closing from Covid in March of 2020, after the SCCS was conducted for the year.)  Does the increase reflect a change in how the survey was conducted that year (i.e., who participated) and/or how students interacted? 

  • During 2021, many schools varied their teaching delivery during the year – sometimes being completely online, sometimes offering a hybrid option, sometimes being in person. (Note: these varied teaching deliveries continued in 2022, as noted in the student demographic descriptions). 
  • In addition, some schools adjusted their schedules so that, for instance, classes were only held 4 days a week and the 5th day was for catch up. 
  • In person attendance was much lower that year; did students treat each other better because there were fewer of them in classes? 
  • By all accounts, teachers were aware that students were struggling emotionally and socially that year; did teachers make changes to classroom environments (virtual or in-person) that improved peer culture?  
  • What else might account for the favorability increase and why did it return exactly to the previous year’s level when school became (more) in-person again?  

What questions might you have about how representative this data is? What might AASB or schools do to make this survey and its results even more representative and useful?

The complete data shows (though not in these slides) how many respondents were from each student and staff demographic category and what those corresponding percentages of all respondents were. It does not show, though, how representative the respondents were of the district or state population as a whole. We know that 3/5 of Alaskan school districts participated and that, within those districts, about 2/3 of students in participating schools took the survey. We don't know, though, whether some grades, genders, or other groups were under- or overrepresented. AASB is working on this analysis, but it is not currently available.

How might this data be used?

We look forward to hearing student suggestions.  Below are some examples of how organizations have used this data so far:

  • School boards use the results from the SCCS survey to set district priorities
  • Some schools look at the survey results as a staff and then use it to set goals and make changes at a school-level
  • AASB uses SCCS to measure progress on its goals as an organization, to apply for grants and to measure progress on those grant goals.
  • Various other entities – such as the Alaska Department of Education – use the SCCS in a variety of ways.  Some, for instance, link school climate to academic achievement or teacher retention, or they look at the impact of programs such as the trauma engaged schools work.

Written by Brenda Taylor, with substantial use of “About Alaska’s School Climate and Connectedness Survey” by AASB and with advice from Lauren Havens and Kami Moore of AASB. 

Data Source: Alaska School Climate and Connectedness Survey, 2022

Visualization Source: Alaska Association of School Boards

Additional Resources:

School Climate and Connectedness Survey results, statewide: http://bit.ly/anchoragepublic

“About Alaska’s School Climate and Connectedness Survey” (Alaska Association of School Boards) https://panorama-www.s3.amazonaws.com/clients/alaska/2022/22_About_SCCS_FINAL.pdf

Inflation and Education Funding

Slow Reveal

Notice/Wonder/Connect

The top graph shows a comparison between Alaska’s K-12 yearly public school funding and the significantly decreased value of that funding due to inflation over the last eleven years, from 2012 to 2023. How should the State of Alaska decide how much money to distribute to the schools for education?

Background on Education Funding:

Determining the Base Student Allocation:

The yearly funding is based on the BSA (Base Student Allocation, a dollar amount per student), which is established in law by the Alaska State Legislature. That BSA amount is then multiplied by the AADM (Adjusted Average Daily Membership) to determine the amount for each school district; those district amounts are added together to determine Basic Need, the total education funding provided. The AADM is the actual number of students (Average Daily Membership, or ADM) adjusted by several factors such as the size of schools, cost of living in different districts, and additional costs for special education. The actual sources of Basic Need funding are a combination of required local funding from municipal school districts, deductible federal impact aid (federal funds that, among other things, offset lands that are exempt from local property taxes), and State funds. There are also some additional state and federal funds, described below.
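A minimal sketch of the formula described above; the BSA amount and AADM values here are hypothetical placeholders, not actual figures from the foundation formula.

```python
# Sketch of the Basic Need calculation described above.
# All numbers below are hypothetical placeholders, not actual district data.

bsa = 5_930  # Base Student Allocation ($ per student), set in statute (placeholder value)

# Adjusted Average Daily Membership (AADM) per district: the actual student
# count (ADM) already adjusted for school size, district cost factors, and
# special education costs.
aadm_by_district = {
    "District A": 4_200.5,
    "District B": 850.0,
    "District C": 12_310.2,
}

basic_need = {district: bsa * aadm for district, aadm in aadm_by_district.items()}
total_basic_need = sum(basic_need.values())

for district, amount in basic_need.items():
    print(f"{district}: ${amount:,.0f}")
print(f"Total Basic Need: ${total_basic_need:,.0f}")
# Basic Need is then covered by a mix of required local funding,
# deductible federal impact aid, and State funds.
```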

Other Education Funding in Alaska

Every year there is also a formula for funding transportation for each district. Slide 16 shows the match between funding allocated and actual transportation costs since 2013. In general, the allocated funds are less than the actual costs (fuel and staffing, primarily), which has meant that districts have had to use some of their BSA funds to pay for transportation.

Some years – as seen on the original graph (slide 8) – the state provides additional one-time funding that is outside of, and in addition to, the BSA formula funding. That funding sounds large when reported as a lump sum, but when divided out per student (as in slide 11) it shows up as a useful but not very significant increase for each school district.

There are additional funds coming into school districts directly via Title I (federal) funding, local municipal funding, and district or school generated grants and fundraising.

Inflation and the BSA

The BSA has not been increased since 2016 (i.e., FY2017), so education funding has not kept up with inflation. The question being hotly debated now in the legislature is how much to increase the BSA for Fiscal Year 2023-24 (known as FY24; synonymous with school year 2023-24). Districts across the state are reporting struggles as their costs – such as fuel, classroom materials, and insurance – rise while their funds remain constant. Some districts have closed schools and "many have cut staffing and services, increasing the number of students in each classroom." (Alaska Beacon) In rural schools, funding deficits have a particularly large impact; schools cannot pay for needed repairs and, for instance, have to manage without water or close.
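To see how a flat BSA loses purchasing power, here is a minimal sketch of the inflation adjustment behind graphs like these; the BSA amount and the annual inflation rates are illustrative assumptions, not the actual CPI data used by ACSA or Legislative Finance.

```python
# Illustrative sketch: the real (inflation-adjusted) value of a flat BSA.
# The BSA amount and the yearly inflation rates are placeholder assumptions.

bsa_nominal = 5_930            # unchanged dollar amount each year (placeholder)
inflation_by_year = {          # hypothetical annual inflation rates
    2018: 0.030, 2019: 0.015, 2020: 0.010,
    2021: 0.047, 2022: 0.080, 2023: 0.030,
}

deflator = 1.0                 # cumulative price level relative to the base year
for year, rate in sorted(inflation_by_year.items()):
    deflator *= 1 + rate
    real_value = bsa_nominal / deflator   # BSA expressed in base-year dollars
    print(f"FY{year}: nominal ${bsa_nominal:,} is worth ${real_value:,.0f} in base-year dollars")
```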

A wide range of remedies – and corresponding legislative bills – are currently being suggested. They vary widely in their approaches, including how much to increase the BSA, how to pay for it, how to plan for future inflation, and whether or how to include “accountability.”   Last year, a $30 increase to the BSA was voted in to begin in FY 24. Additional suggestions and/or bills for FY24 range from $0 (from the governor) to $860 to $1000 to $1250.  Refer to the resources list for more details.

Data Considerations and Visualization Choices

Visualization Source: https://www.akleg.gov/basis/get_documents.asp?session=33&docid=225

The creator of the “pencil graph” is the Alaska Council of School Administrators (ACSA), using data from Alaska Legislative Finance.  ACSA represents school administrators and is working, among other efforts, to convince legislators to increase the BSA. ACSA made choices, in creating its graph, to emphasize the declining value of the BSA due to inflation over time, i.e., to make the inflation line look steep and dramatic.  By contrast, the graph above, created by the Alaska Legislative Finance itself, made choices that show the decline, but do not make it seem as severe. 

Pencil Graph by ACSA (slide 4) vs. Graph by AK Leg Finance (slide 10):
  • Scale: every line = $500 vs. every space = $1,000
  • Y-axis start point: $3,000 vs. $0
  • X-axis start point: FY12 vs. FY14
  • X-axis end point: FY23 vs. FY24 (proposed by Governor)
  • Inflation reference year: FY12 vs. FY22
  • Difference in BSA value (adjusted for inflation): $1,154 in FY12 dollars vs. $1,043 in FY22 dollars

Making Comparisons about Education Costs 

Comparing the cost of education within Alaska and between Alaska and other states is not simple and is, sometimes, political. Alaska generally ranks among the top ten states in total dollars spent per student (around $17,000); however, when the cost of living is factored in, Alaska ranks much lower. The Institute of Social and Economic Research (ISER) at the University of Alaska Anchorage determines cost-of-living differentials among Alaska school districts; it also compares Alaska to the rest of the US. For that report, see here. Costs that are much higher in Alaska than in other states, and also much higher in Alaskan villages than in hubs, include health care for staff, energy, and the preponderance of small schools (which cannot benefit from efficiencies of scale and have more frequent and costly staff turnover; in addition, small schools in villages are disproportionately affected by climate change and other hazard events). ISER concluded "… that by 2019, even though Alaska pays more than any other state on a per-student basis, the cost of living here is so high that once that factor is included, public schools here received less money than the national average." (Alaska Beacon)

Written by Brenda Taylor, with frequent references to publications from Alaska Council of School Superintendents, Alaska Legislative Finance, and the Alaska Beacon.

Additional Resources:

Data Source: Legislative Finance

Visualization Source: https://alaskaacsa.org/wp-content/uploads/2023-JPS-Layout-final.pdf

Human Migration

Slow Reveal
Notice, Connect, Wonder

Look at the speaker notes and beginning slides for additional teacher guidance.

This graph shows how net migration (whether more people have moved in or out, and how many) has changed depending on age, time period, and region of Alaska. The volume of colors and lines can feel quite overwhelming, but there are some clear "big picture" trends and differences to note. Mat-Su is the only region that has been growing through all 3 decades. During that same time period, the more geographically distant and less populous regions (Northern, Southwest, and Southeast, Interior, and Gulf Coast minus their population centers) have experienced the most consistent outmigration, while the population centers (Kenai, Anchorage, Fairbanks, and Juneau) have experienced more varied migration – both in and out. Many regions showed a slight bump in in-migration in the late 2000s, which is when the rest of the US was in the midst of the "Great Recession" (beginning in December 2007), and people from the lower 48 states came to Alaska seeking jobs; the recession did come to Alaska a few years later.

Among the trends that are harder to see in these graphs, but significant, is that younger people (those in their 20s) move more frequently than those who are older or younger. That's not surprising, as they are more likely to be moving for jobs and education and less likely to have reasons to stay put.

Mat-Su’s consistent net in-migration reflects its proximity to Anchorage (for jobs) and its cheaper housing.  The wide swings in Anchorage’s population reflect the influx of young people in the early 2000’s and then the impact of the statewide recession in the late 2010s.  

In contrast, Kenai has attracted people with young families and those over 60, while 20 somethings have steadily been leaving the region. 

A few things to note about how these graphs are constructed:

  • The y-axis scales differ greatly from graph to graph.  The graphic designers decided to emphasize how different the types of changes were (in/out, relatively small/large), rather than focus visually on the actual numbers.
  • These graphs don’t give an indication of how the overall population of Alaska was changing during this time. 
  • The graphs do not take into account how the actual population of these regions of Alaska has changed, i.e., they are not accounting for natural increase (how the population changes because of births and deaths).
  • The data is shown in rolling 5-year intervals; generally that makes it easier to see overall trends (see the sketch after this list).
  • The economic regions are designated by the Alaska Department of Labor which bases them on 2013 census and borough geography.
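A minimal sketch of the rolling-interval idea mentioned above, using made-up annual net-migration numbers; the 5-year window comes from the report, but the data values do not.

```python
# Sketch: smoothing noisy annual net-migration counts with a rolling
# 5-year window, as the report's graphs do. All values are invented.

annual_net_migration = [120, -340, 80, -150, 400, 260, -90, -500, -420, 310, 150]

window = 5
rolling = [
    sum(annual_net_migration[i:i + window]) / window
    for i in range(len(annual_net_migration) - window + 1)
]
print(rolling)  # each value averages one 5-year interval, so year-to-year
                # noise is damped and the longer trend is easier to see
```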

Pages 11-15 of the full report from the Department of Labor give excellent, clear summaries and explanations of each region's changes. Additional pages and graphs examine and highlight other important trends and their causes.

Additional Resources:

Visualization Type: Stacked Bar Graph

Data Source:

Visualization Source:

Representation in STEM Classes

Slow Reveal
Notice, Connect, Wonder

This data shows the demographic makeup of some of the STEM classes at Juneau-Douglas High School: Yadaa.at Kalé in Juneau, Alaska, for the school year 2017-2018. It shows that the percentages of specific races/ethnicities in STEM classes are frequently very different from the percentages in the school as a whole. (Note: this table shows a few select classes. The analysis below refers both to the data in this table and to the data from the complete table, which is available here.) More specifically, it is most frequently the white students who are overrepresented in the STEM classes and the other groups that are somewhat or very underrepresented; for instance, there are 0 Alaska Native students in calculus or engineering, even though they make up 15% of the school population. Why is this the case, and does it matter?

It matters because anytime there’s a big discrepancy between school-wide and class-specific enrollment, it’s worth trying to understand why it’s happening and what potential consequences there might be.  The why is complex and some possibilities are suggested below. The consequences in this case are dramatic; taking STEM classes correlates with majoring in STEM subjects in college which is, in turn, necessary for most STEM occupations.  Currently, STEM occupations are generally both more highly paid than other occupations and less representative of the general population. That means that the work done in those fields does not benefit from a diversity of experience and point of view and so work is done “for” or “about” people by other people who do not have the relevant and necessary background (e.g., male engineers making products for females). Furthermore, the wealth of STEM fields is concentrated in certain groups (especially white men).  For example, as slide 18 shows, on the whole, white and Asian men in STEM jobs make considerably more than men of other races or women – so much so that Asian men are likely to make nearly twice as much as, for instance, Hispanic women.  

Some of the additional slides show the differential representation in STEM classes of other groups: male and female (no other identification of gender was available then), socioeconomic status (through free and reduced lunch), and of 9th grade math classes. At JDHS: Yadaa.at Kalé, females are generally overrepresented in the natural sciences and underrepresented in physical sciences, technology, and the highest level math courses. The greatest gender differences are in Principles of Engineering (5% female) and Intro to Health Sciences (91% female), which, not surprisingly, corresponds with the differences in which STEM occupational groups women are overrepresented (74% in health-related) or underrepresented (15% in engineering).  Free and Reduced Lunch representation is egregiously low or nonexistent throughout all but one STEM class (one year of Oceanography when it was a few percentage points higher than the general population).  And, there are also very sharp distinctions visible in the math preparation for different classes.  Enrolling in the STEM classes which are often a prerequisite for college STEM classes (Physics, Chemistry, upper level math, upper level biology) is highly correlated with having already completed algebra in 8th grade, prior to high school. (Remember, to see the complete data, go here.)

Why are white students so overrepresented in STEM classes and free and reduced lunch students so underenrolled? There are no simple answers, but there are lots of pieces. The level of math a student has reached by 9th grade is clearly indicative. How is it that white students are so overrepresented in 8th grade algebra? (slide 17) They are also overrepresented in Gifted & Talented education – the testing for which is usually done in 3rd grade. It's been known for a long time that the testing for G&T is unfair, but no substantial changes have been made. Participation in math classes is determined by teachers – and by some parents who may influence class assignments by paying for tutors and/or lobbying the administration for specific classes. In addition, some parents are able to pay for their children to take supplemental online courses to advance them along the math levels. Another factor is that math is often a class where homework is emphasized and where different families are more or less able to support their children with their homework. Students whose families do not have extra funds, or are not themselves confident in math, are at a disadvantage in reaching more advanced math classes.

The data was culled from PowerSchool by the principal as part of that school's Equity project. It was shared with the school's teachers in a staff meeting and with the Juneau community through the local newspaper. Data was also collected the following year for comparison's sake (here). Because these classes are quite small, one student – especially from a marginalized group – can make a big difference in the representation data. For example, there's not enough data to conclude that there's a trend for Black students to be more likely to take AP Statistics. Cumulatively, however, across classes, across marginalized groups, and over the two years, the pattern of underrepresentation is clear. These data are certainly not unique to JDHS: Yadaa.at Kalé or to Juneau; we hear about similar patterns of underrepresentation of marginalized groups in certain classes throughout the US.

Race, ethnicity and gender labels were taken from PowerSchool, which means that families chose those categories from the choices available.  Families could choose only one race category (Asian, Black, Multi-Ethnic, Alaska Native/American Indian, Hispanic or White).  Free and Reduced lunch data is also entered into PowerSchool, via the collection of paperwork required for Free and Reduced Lunch. Other demographic tags regarding special programs (English Learners, Extended Learning and Special Education) were not included, in part to allow for specific analysis regarding Race, Ethnicity, Gender and Socioeconomic patterns (if there were any). 

We want to be clear that these numbers do not reflect the inherent ability or potential of any individual students or of any group(s) of students. We know that it can be difficult, especially for the students from these underrepresented groups, to see this data and that there may be a range of emotions: sadness, anger, bravado, disbelief, etc.  We think that it’s important to share this data, talk about it, think about what factors have contributed to it, and work to make sure that these inequities do not continue into the future.

A very real danger in collecting and presenting data this way is that it might be incorrectly interpreted to say that, for instance, girls don’t enroll in Engineering because they’re not “good at” and “can’t do” Engineering.  That is simply untrue; both societal messages and actual practices and policies have created these disparities.  Another, contrasting, challenge is that students in the underrepresented groups may feel that the message of this data or this article is that they, as individuals, “have to” go into STEM fields – even if they don’t want to. That is also not the intention of this analysis.  Our point is that all students deserve the same opportunities to explore and learn so that they can make their own, independent, choices and we, as adults, have a tremendous responsibility to constantly be analyzing those opportunities and surrounding context to make them more and more equitable.  

What questions are you left with about the school data around you?

What other student/school data do you think students (and families) should be interacting with?

What other data could be gathered to understand and then address these unequal representations?

Slideshow and “reveal” analysis developed and written by Brenda Taylor, Juneau STEM Coalition, with some advising by Paula Casperson, JDHS: Yadaa.at Kale principal.

Additional Resources:

Visualization Type: Table

Data Source:

Visualization Source:

Eastern Bering Sea Ecosystem

Slow Reveal
Notice, Wonder, Connect

Headlines suggested by students: “Ecosystem Changes in Alaska,” “The Arrows of Climate Change,” “The Ups and Downs of Alaska’s Animals,” “The Alaskan Marine Life Population Change,” and “The Population of Alaska’s Animals.”

This graphic shows how different aspects of the ecosystem in the Eastern Bering Sea – from temperature to salinity to birds to fish – fared in 2022, relative to ongoing trends (not fashion trends!). Environmental conditions, like ocean temperature, affect plants and animals in different ways. The Bering Sea has been warming, and this graphic shows how the impact of such climate change can result in "winners" and "losers". In addition to a long-term warming trend, the Bering Sea recently experienced a "pulse event" of a near complete lack of sea ice during the winters of 2017/18 and 2018/19. In 2022, there were some clear ecosystem responses to these environmental changes, such as increases in pollock and herring and decreases in several crab stocks and multiple salmon runs in Western Alaska. This graphic (and accompanying report) is created annually by NOAA Fisheries (part of the Department of Commerce) in order to provide a contextual summary of what's happening in the ecosystem so fisheries managers can then make decisions about how many fish or crab can sustainably be harvested.

The graphic was designed to encourage “big picture” understanding – quite literally.  There are no numbers, only directional arrows, and no words, only icons. (There is, of course, corresponding text in the report that describes the arrows and icons in ever increasing detail.)  What are the advantages and disadvantages of excluding numbers and words?  

NOAA Fisheries creates graphics in this style for each of its Large Marine Ecosystems (e.g., Eastern Bering Sea, Aleutian Islands, Gulf of Alaska) and has been doing so since 2018. They have produced longer text reports since the 1990s.  (The graphic and summary report are not prepared for the Arctic Region because there are no commercial fisheries there.) This graphic is from the “2022 Eastern Bering Sea Ecosystem Status Report: In Brief” produced by Elizabeth Siddon (based in Juneau) with the Alaska Fisheries Science Center, NOAA Fisheries.  The complete In Brief is available here. The concept of these graphics was developed by Elizabeth Siddon and the other two leads for the Ecosystem Reports, all of whom worked very closely with the NOAA Communications Program.  Since then, the report style has been imitated by other NOAA centers in other parts of the country.  The reception of the 4-page In Brief(s) has always been very positive, in part because, prior to the Briefs, the only option was to read a 200+ page report.  These In Brief documents are the only ones printed in color for distribution by NOAA, e.g., at fisheries management meetings, or for mailing to rural communities where bandwidth might prevent people from being able to download the reports. 

NOAA’s goal in these graphics is to provide an accurate, but general summary of how things are changing in the ecosystem, and not to overwhelm readers with too much information. If people are interested in more details, they can read the full Ecosystem Status Report (here).  Creating one graphic that summarizes 200 pages of dense data is a complex, collaborative and nuanced process.  Among the many considerations that the authors make concerning the data are:

  • Deciding which pieces of data from the full Report get an icon and make it into this graphic is complex.  There is no set formula for that process.
  • Similar icons are used across the management areas (i.e., you’ll see thermometer icons in all regions, but the trend arrow is based on data collected within each region).
  • All arrows are the same size. This is because each icon and arrow is based on a unique dataset; the report authors don’t compare across datasets (that would be like comparing apples to oranges) to be able to indicate whether one increase is larger than another.  In general, arrows refer to long-term trends, not short-term, temporary changes.
  • Determining trends is difficult and not necessarily consistent from one graph to another. Similarly, quantifying "typical" is essential, and yet not clearcut. Sometimes a line of regression is used; sometimes it's +/- 1 Standard Deviation (see the sketch after this list). Differentiating year-to-year variability from longer-term trends is also important. (See the examples below about surface air temperature and salinity.)
  • Up arrows indicate an increase.  Sometimes that increase is a good thing for the ecosystem as a whole; sometimes it is not.
  • Some icons are specific to a place and are represented with a pin (e.g., auklets) and other icons represent data collected across a wide area. Every attempt is made to place icons as close as possible to the geographical place of significance, but, occasionally, certain icons are placed in less relevant spots simply because there was space available on the graphic.
  • This graphic is based on the most recent data available, which generally means data from 2022, but in some cases means data from 2021.  Real-time (2022) information is much preferred by the fisheries managers. When the author started the Eastern Bering Sea Report in 2016, about 50% of contributions were based on the previous year’s data and about 50% were based on current year data. Since then, more and more contributors are working as hard as they can to turn data around FAST after summer field surveys in order to provide it to the reports. As a result, in 2022, ~95% of the Bering Sea report was 2022 data!
  • Fish are labeled by the year they were born.  So, for instance, the 2017 year class of pollock were age-0 in 2017 and were age-5 in 2022. 
  • The circled icons are multi-species groupings (e.g., the chlorophyll circle is a measure of phytoplankton, of which there are a bunch of different species). The copepod circle includes multiple species of copepods grouped together. The salmon circle includes Chinook, chum, and coho salmon.
  • Slide 3 asks the question of HOW you develop a synthetic “story” from disparate data pieces. How do you connect the dots between sea ice and sea birds? And what do sea birds tell us about the health of the ecosystem for the fish and crab stocks that NOAA manages? Each year, the report editors strive to take dozens of individual data and create a “story” about what is happening in the ecosystem that people can understand. But the final “story” each year doesn’t necessarily include ALL the data pieces. Some pieces don’t “fit” the story;  how do the editors determine that “fit”? 
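As a rough illustration of the "+/- 1 Standard Deviation" and regression approaches mentioned in the list above, here is a minimal sketch; the anomaly values are invented, and NOAA's actual procedure layers expert judgment on top of anything this simple.

```python
# Sketch of two simple ways to decide whether the latest year is "typical":
# (1) compare it to +/- 1 standard deviation of the historical values, and
# (2) fit a least-squares trend line. All data below are invented.
import statistics

years = list(range(2000, 2023))
anomalies = [0.1, -0.3, 0.2, 0.0, -0.1, 0.4, 0.3, -0.2, 0.1, 0.5,
             0.2, 0.6, 0.4, 0.3, 0.8, 0.7, 0.9, 1.1, 1.2, 0.9,
             0.8, 1.0, 0.2]                     # last value = 2022

mean = statistics.mean(anomalies[:-1])          # baseline excludes 2022
sd = statistics.stdev(anomalies[:-1])
latest = anomalies[-1]
if latest > mean + sd:
    label = "above typical"
elif latest < mean - sd:
    label = "below typical"
else:
    label = "within +/- 1 SD of typical"
print(f"2022 anomaly {latest:+.1f} is {label}")

# Least-squares slope gives the long-term trend (units per year).
n = len(years)
mx, my = sum(years) / n, sum(anomalies) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, anomalies)) / \
        sum((x - mx) ** 2 for x in years)
print(f"Long-term trend: {slope * 10:+.2f} per decade")
```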

This graph depicts the year-to-year variability (spikes) of Surface Air Temperature (SAT) Anomalies at St. Paul Island (Pribilof Islands) overlaid on the long-term trend (increasing line). From it, one can see that, while in the short-term, 2022 cooled and the yearly average of temperatures was “normal”, over the last 40 years, there has been a steady increase of temperature of 0.5C/decade.  

TERMS frequently used:

Bering Sea Shelf – the Continental shelf extends under the eastern half of the Bering Sea. (The lighter blue in slide 3).

Shelf break – where the Continental Shelf ends and the deep sea begins. The depth increases quickly from 200m to >1000m. 

Anomaly/anomalies – the deviation or difference from past “normal”.  In a graph, typically, “normal” is 0 and anomalies are above or below.

Trend – the general direction over time

ICONS and ARROWS

The author chose icons for the WGOITAG slides that were representative of different aspects of the physical environment and food chain as well as those that represented some of the different data considerations noted above.  Icons are listed in order from the physical environment (temperature, sea ice, salinity), to primary production (chlorophyll, coccolithophores), to secondary production (zooplankton), forage fish (herring), groundfish (pollock), salmon, crabs, and seabirds.

Physical Environment

Temperature
Content: The extended warm phase (2014-2021) is largely over.  Temperatures have returned to pre-warm phase averages.
Data Consideration(s): The "warm phase" is clear to see in the accompanying graphs. It's not so clear how to define the beginning and end of that warm phase, though. The horizontal dashed lines are +/- 1 Standard Deviation. Specifically, was 2014 warm enough to be considered part of the warm phase? 2014 is within 1 SD of average in the Northern Bering Sea, but above 1 SD in the Southeastern Bering Sea. The author of this report, with a great deal of collaboration with other experts and oceanographers, decided yes; someone else might have decided no.

The y-axis here is surprising – 500?! The y-axis is the cumulative annual SST anomaly (the sum of daily temperature anomalies), so "0" is the long-term average. In the graph, the scientists chose to show the cumulative SST because cumulative warming may represent important conditions for the ecology of the system: total thermal exposure for organisms was higher than under historical conditions. For example, for a juvenile fish, it's not just that it was warm for 1 day (maybe they could deal with that), but that it was warm day after day after day (cumulative), which is more stressful and harder to tolerate.
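If a y-axis value near 500 still seems puzzling, here is a minimal sketch of what a cumulative anomaly is; the daily temperatures are invented and the climatology is simplified to a single constant, so this is an illustration of the idea, not the actual NOAA calculation.

```python
# Sketch: a cumulative annual temperature anomaly is the running sum of each
# day's departure from the long-term average, so it measures total thermal
# exposure over the year. All values below are invented.
import math

long_term_daily_mean = 4.0    # degrees C; held constant here for simplicity
daily_temps = [long_term_daily_mean + 1.4 + 0.8 * math.sin(2 * math.pi * d / 365)
               for d in range(365)]   # a year running ~1.4 C warm on average

cumulative_anomaly = sum(t - long_term_daily_mean for t in daily_temps)
print(f"Cumulative annual anomaly: {cumulative_anomaly:.0f} degree-days")
# A persistently warm year piles up several hundred degree-days, which is
# why a cumulative-anomaly axis can reach values like 500.
```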

Sea Ice
Content: Sea ice is by far one of the most important drivers of the ecosystem in the Bering Sea (and unique to the Bering; there is no sea ice in the Aleutians or Gulf of Alaska). Sea-ice extent remained above-average for most of winter 2021-22.  However, the ice was thinner almost everywhere than the previous winter and, as a result, melted more quickly in the spring of 2022.
Data Consideration(s): Because of the complexity of the content, the full Report has a variety of different data graphs to understand how sea ice is changing over time. This is an example of where the icon arrow cannot reflect both +/- simultaneously and the scientists have to make decisions.  Here, the up arrow reflects the increase in areal extent of sea ice, but does not capture the reduction in ice thickness.  Researchers in the Bering Sea have a better understanding of the impact of ice extent and what it means for the ecosystem. For that reason, the author  chose to go with the “up” arrow. Ice thickness is a newer metric of sea ice and there is less known about what changes in thickness “mean” to the system.

Salinity
Content: Salinity has been increasing steadily since 2014 (perhaps as the result of loss of sea ice), which corresponds with the warm phase from 2014-2021.  In 2022, though, the salinity decreased.  
Data Consideration(s): The purpose of this report is to report on trends. For that reason, while there were a few data points in 2022 that showed decreased salinity, the authors of the report chose an up arrow because the longer-term increasing trend was thought to have more of an impact on the overall ecosystem. They noted the one year of lower salinity in the text.

Primary Production

Chlorophyll-a
Content: Chlorophyll-a has been decreasing since 2014. This may have serious consequences for the rest of the ecosystem because it’s the base of the food chain.  
Data Consideration(s): This data is from satellites. There are several advantages of satellite data, including high spatial and temporal coverage. However, these products are limited to measurements within the surface layer of the ocean and also have missing data due to ice and cloud cover. Chlorophyll-a biomass does not directly provide information about primary productivity. Biomass is a balance between production and losses, therefore lower biomass levels could mean less production, or they could mean more of the production was eaten by grazers or sank deeper in the water column than the satellite can "see".

Coccolithophores
Content: The coccolithophore bloom was among the highest ever observed.  However, because the bloom turns the water a milky aquamarine color, it can make it more difficult for some species to see and, therefore, successfully forage for food.
Data Consideration(s): Here is an example of when “up” is not thought to be a “good” thing. Coccolithophores may be a less desirable food source for microzooplankton and they cause a milky aquamarine color in the water during a coccolithophore bloom that can reduce foraging success for visual predators, such as surface-feeding seabirds and fish.

Secondary Production

Euphausiids
Content: Euphausiids increased in number in both the northern and southern areas.
Data Consideration(s): This is a situation where sampling bias needs to be accounted for. These data are from a sampling net called a bongo net. It's fairly well known that larger euphausiids can actually swim and escape the bongo net, so these data are generally used to look at relative trends, but not absolute abundance values. That said, this year there was a separate euphausiid index derived from acoustics, and that also showed an increase. And, there was evidence that plankton-eating seabirds did well this year, so all of those factors suggest euphausiids were abundant. (Also, look at the speaker notes for the euphausiids slide to learn how NOAA balances the use of real-time data with data that takes 2-3 years to process.)

Forage Fish

Togiak herring
Content: In 2021, the herring in the specific area of Togiak (which was the 2017 year class) was significantly greater in number than previous years.
Data Consideration(s): This is a place-based example specific to herring that spawn near Togiak.

Groundfish

Pollock
Content: Age-4 pollock in 2022 (the 2018 year class) was well above average.  Various warm and cool temperatures at crucial times in their early life cycle, as well as abundant euphausiids in 2018 and reduced predation in 2019-21, coalesced to increase their survival rate.
Data Consideration(s): Pollock is the biggest commercial fishery in the US (by weight), so there is a lot of research and work on pollock. Both fisheries managers and the fishing industry are interested in recruitment (how many young fish will survive to the age/size that can be caught in the fishery).  The Temperature Change Index used in the icon slide is an example of one of the considerations involved in that estimation. Juvenile abundance does not always align with adult abundance a few years later. Juvenile fish go through lots of ‘bottlenecks’, so it can be difficult to predict future adults based only on the number of juveniles.  The In Brief text explains some of those potential bottlenecks.

Adult Sockeye Salmon
Content: Sockeye salmon runs continued to increase – in record numbers –  in Bristol Bay.
Data considerations:  This data is from a well-established sampling program.  What is fascinating is that these sockeye are doing SO well while so many other salmon runs are doing poorly; why is that? (i.e., we know it’s not a problem with the data!).

Crabs

Snow Crab and Bristol Red King Crab
Content: Because of the unprecedented warm phase of 2014-2021, and the accompanying near-absence of sea ice, crab stocks have shifted northwestward and decreased.  
Data Consideration(s): These icons are not place-based (if anything they might be placed further north to show their northward migration.)  This is an example of placing the icons more generally…and frankly, where there was space after placing all icons that needed to be linked with a specific place.

Seabirds

Auklets (St. Lawrence Island)

Content: Zooplankton-eating seabirds, like auklets, increased both further south in the Pribilof Islands and further north on St. Lawrence Island.
Data considerations: The auklets are place-based (with their own place marker) because the censuses are from discrete breeding colonies on these islands, BUT the birds are foraging over much larger areas and the researchers use them as indicators of prey available for the juvenile fish. How would you show this data? Place-based (because that’s what the data are) or general (because that’s how their data is used)?

Kittiwakes (St. Lawrence Island)
Content: Kittiwakes and other fish-eating seabirds did well at the Pribilof Islands to the south, but there were more reproductive failures on St. Lawrence Island, in the northern part of the Bering Sea.  That pattern is consistent with the greater availability of forage fish in the south, and lesser in the north.
Data considerations: The graph used as supporting evidence is itself distinctive – a separate example of using icons (this time with varying degrees of smiley faces in eggs, instead of arrows).

Additional Resources:

  • Full Eastern Bering Sea Ecosystem Report
    Citation: Siddon, E. 2022. Ecosystem Status Report 2022: Eastern Bering Sea, Stock Assessment and Fishery Evaluation Report, North Pacific Fishery Management Council, 1007 West 3rd Ave., Suite 400, Anchorage, Alaska 99501 Contact: elizabeth.siddon@noaa.gov
    Links to other full Alaska Ecosystem Reports are available here.
    Links to all (across the US) Ecosystem Status Reports are available here.
    A real-time marine heatwave tracker for the Bering Sea, Aleutian Islands, and Gulf of Alaska available here.

Text by Elizabeth Siddon, NOAA (elizabeth.siddon@noaa.gov) and Brenda Taylor, Juneau STEM Coalition.

Visualization Type: Infographic

Data Source: Full Eastern Bering Sea Ecosystem Report

Visualization Source: Full Eastern Bering Sea Ecosystem Report

Ranked Choice Voting

Slow Reveal
Notice, Wonder, Connect

This graphic shows how the ranked choice voting tabulation process worked in the November 2022 Alaska race for US House of Representatives. In the House election, Mary Peltola (Democrat) started off with the most votes, but she had fewer votes than the two Republicans (Nick Begich and Sarah Palin) put together. However, enough of Nick Begich's voters preferred Mary Peltola over Sarah Palin that, when he was eliminated, Peltola won. Meanwhile, the Senate race saw a different situation. Lisa Murkowski and Kelly Tshibaka are both Republicans and had far and away more votes than the Democrat (or the third Republican), but neither had over 50%. After two rounds of elimination, Lisa Murkowski won. In both of these races, the candidate who had the most votes in the first round ended up winning. The difference is that in the House race, the Republican candidates combined had more votes, but enough voters preferred the Democrat over the other Republican that it changed the outcome. Meanwhile, in the Senate race, the Democrat had many fewer votes, but when she was eliminated, her votes decided which of the Republican candidates would win. In both of these scenarios, ranked choice voting gave voters more opportunity to express their preferences among candidates.

Beginning in the summer of 2022, Alaska switched to a ranked choice voting system. In ranked choice voting, voters rank candidates. There are multiple types of ranked choice voting, but Alaska uses instant-runoff voting. In round one of the counting, each candidate starts with however many voters ranked them #1. If no candidate has received more than 50% of the vote, then the candidate with the fewest votes is eliminated and all their votes are distributed to each voter’s second pick. This process is repeated until one candidate has over 50% of the vote. In some cases, there is no elimination process because one candidate wins in the first round. In November, 2022, for instance, Dunleavy received over 50% of the round one votes in the Alaska governor’s race so he immediately won. However, the statewide races for U.S. Senate and House both required two rounds of elimination to reach a candidate. 
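A minimal sketch of the instant-runoff counting loop described above; the ballots and candidate letters are invented placeholders, not actual 2022 election data.

```python
# Sketch of instant-runoff tabulation: count each ballot for its highest-ranked
# surviving candidate, and if no one has a majority, eliminate the last-place
# candidate and recount. Ballots below are invented placeholders.
from collections import Counter

ballots = [
    ["A", "B"], ["A"], ["A", "C"], ["A", "B"],   # 4 ballots rank A first
    ["B", "C"], ["B", "A"], ["B"],               # 3 ballots rank B first
    ["C", "B"], ["C", "B"],                      # 2 ballots rank C first
]

remaining = {c for ballot in ballots for c in ballot}
round_number = 1
while True:
    tallies = Counter()
    for ballot in ballots:
        for choice in ballot:
            if choice in remaining:      # ballot counts for its top surviving pick
                tallies[choice] += 1
                break
    active_ballots = sum(tallies.values())   # ballots with no surviving pick are exhausted
    leader, leader_votes = tallies.most_common(1)[0]
    print(f"Round {round_number}: {dict(tallies)}")
    if leader_votes > active_ballots / 2 or len(remaining) == 1:
        print(f"Winner: {leader}")
        break
    remaining.discard(min(tallies, key=tallies.get))   # eliminate last place
    round_number += 1
```

In this toy example the first-round leader (A) ends up losing once second choices are counted, which illustrates how the transfers can change an outcome; in both 2022 Alaska races, as noted above, the first-round leader happened to win anyway.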

There are many decisions made when designing a voting system. For example, Alaska starts with a primary. In that primary, voters only select one candidate. The top four candidates advance to the general election where instant-runoff voting occurs. All of these decisions impact the outcome of the election. Instant-runoff voting encourages a candidate to have broad appeal even if their supporters are not very enthusiastic about them. A candidate needs to get at least 50% of people to have voted for them, but they could have been a voter’s second or third choice. Meanwhile in a first-past-the-post system like most of the United States uses, a candidate needs to have the most supporters who feel strongly enough about them to pick them over everyone else even if that is still a minority of the voters in the area. Alaska tries to balance these tradeoffs by having the four general election candidates be picked by first-past-the-post and the winner be selected by ranked choice.

Additional Resources:

Visualization Type: Sankey Diagram

Data Source: Alaska Division of Elections

Visualization Source: Craig Fox using SankeyMatic

This graphic can be replicated with different data with some effort and minimal technical skill. Election data must be retrieved from Alaska's Division of Elections and manually entered into SankeyMatic. The text entered to create these visualizations can be found below.
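SankeyMatic's input is a plain-text list of flows, one per line, in the form "Source [amount] Target". The lines below are an illustrative sketch with placeholder vote counts and labels, not the actual text or totals behind the graphic above.

```text
Peltola (Round 1) [130000] Peltola (Round 2)
Palin (Round 1) [105000] Palin (Round 2)
Begich (Round 1) [28000] Peltola (Round 2)
Begich (Round 1) [50000] Palin (Round 2)
Begich (Round 1) [8000] Exhausted ballots
```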

Triple Dip – Winter Forecasting

Slow Reveal
Notice, Wonder, Connect

“La Nina Triple Dip”  Seasonal Forecasts in Alaska, Winter 2022-23

Predicted SST anomalies (˚C) for January-March 2023 from the National Multi-Model Ensemble (NMME) of coupled atmosphere-ocean climate models. 

These two graphs show some of how and what NOAA predicts in its 2022-2023 winter seasonal outlooks (climate) for precipitation and temperature. The first graph (of projected sea surface temperatures) shows a major factor used in generating the predictions in the second graph (of predicted temperatures on land). Overall, the second graph shows that it is more likely to be warmer than normal in Northern Alaska and the Aleutian chain and more likely to be colder than normal in South Central and Southeastern Alaska. Note, this does not mean this is what will happen for sure; these are predictions of likelihood. Remember that: "Climate is what you predict, weather is what actually happens." (National Centers for Environmental Information)

Nicholas Bond, a NOAA scientist, explains the creation of the first graph through the National Multi-Model Ensemble (NMME).  He writes, “an ensemble approach incorporating different models is particularly appropriate for seasonal and longer-term simulations; the NMME represents the average of eight climate models.  The uncertainties and errors in the predictions from any single climate model can be substantial.  More detail on the NMME, and projections of other variables, are available at the following website: http://www.cpc.ncep.noaa.gov/products/NMME/.”
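
To get a feel for the ensemble idea, the short sketch below averages several made-up gridded forecasts, which is roughly what “the NMME represents the average of eight climate models” means. It uses random numbers in place of real model output and is not NOAA’s actual processing, just an illustration of the averaging step.

import numpy as np

# Pretend output from 8 climate models: SST anomalies (˚C) on a small lat/lon grid.
# Real model output would be read from data files; random values stand in here.
rng = np.random.default_rng(0)
n_models, n_lat, n_lon = 8, 10, 20
model_anomalies = rng.normal(loc=-0.5, scale=0.4, size=(n_models, n_lat, n_lon))

# The ensemble-mean forecast is the average across models at each grid point.
ensemble_mean = model_anomalies.mean(axis=0)

# A simple measure of how much the models disagree (larger = less certain).
ensemble_spread = model_anomalies.std(axis=0)

print(ensemble_mean.shape, round(float(ensemble_mean.mean()), 2))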

According to National Weather Service climate researcher Brian Brettschneider, there are three main factors that scientists use in making long-term seasonal forecasts for Alaska:  

1) whether it’s a La Niña, El Niño, or neutral year

2) the evolution of sea ice in the Chukchi and Bering Seas, and

3) long-term trends

La Niña

As the climate.gov staff explain, “El Niño and La Niña are opposite phases of a natural climate pattern across the tropical Pacific Ocean.”  El Niño is the warm phase, La Niña is the cool phase, and there is a neutral phase in between the two, when temperatures and winds are closer to (long-term) averages.  Generally, each phase lasts about a year, though it’s not uncommon for La Niña to last two years (and, once, 33 months).  During La Niña, surface winds in the tropical Pacific are stronger and temperatures are cooler than average.  La Niña (and El Niño) years are often factors in more extreme weather conditions in the rest of the Pacific area.

It’s this pattern (yes, in the tropics!) that has the greatest impact on what kind of winter we’ll have here in Alaska.  What causes this climate pattern to swing is not yet well understood, but the impacts of the swing around the earth are significant and are closely monitored and analyzed. This year, unusually, we are in a third year in a row of La Niña.  (For more information, look here.)

Brettschneider describes the most likely effect of La Niña in Alaska:  “More times than not, La Niña winters are colder than average in Alaska. Not every time, but a majority of times. So that represents about 40% of the variability [in seasonal predictions].”  (APR)

Graph of seasonal ONI values

Seasonal (3-month) sea surface temperatures in the central tropical Pacific Ocean compared to the 1981-2010 average. Warming or cooling of at least 0.5˚C above or below average near the International Dateline is one of the criteria used to monitor the El Niño-La Niña climate pattern. NOAA Climate.gov image, based on data from the Climate Prediction Center.  The Oceanic Niño Index (ONI) is NOAA’s primary indicator for monitoring the ocean part of the seasonal climate pattern called the El Niño-Southern Oscillation, or “ENSO” for short. (The atmospheric part is monitored with the Southern Oscillation Index.) (climate.gov)
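
As a rough illustration of the arithmetic behind the ONI, the sketch below takes invented monthly sea surface temperature anomalies, computes 3-month running means, and flags values at or beyond ±0.5˚C. The numbers are made up, and the real index involves careful regional and base-period averaging, so treat this only as a toy version of the threshold check described above.

# Hypothetical monthly SST anomalies (˚C) for the central tropical Pacific.
monthly_anomalies = [-0.2, -0.4, -0.6, -0.8, -0.9, -1.0, -0.9, -0.7, -0.6, -0.5, -0.3, -0.1]

# ONI-style value: the 3-month running mean of the anomalies.
oni = [sum(monthly_anomalies[i:i + 3]) / 3 for i in range(len(monthly_anomalies) - 2)]

def phase(value):
    # Plus or minus 0.5 is the monitoring threshold mentioned in the caption.
    if value >= 0.5:
        return "El Nino (warm phase)"
    if value <= -0.5:
        return "La Nina (cool phase)"
    return "neutral"

for value in oni:
    print(f"{value:+.2f}  {phase(value)}")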

Sea ice in the Chukchi Sea and the Bering Sea

Brettschneider points out that, “[The sea ice is] kind of getting a late start right now. And when that water is open with no ice on it, there’s a lot of heat that can be liberated into the atmosphere, and it keeps things warm.” (APR)

Trend

Brettschneider reminds us of what we all know: “The trend is warming. You know, if you just woke up from a coma, and someone said, ‘What do you think it’ll be? What do we think the winter will be like?’ You should probably say, ‘I don’t know, but it’s probably going to be warmer than winter used to be,’ just because things are warmer now.” (APR)

There are a variety of other factors that affect the actual (daily) weather we have but that NOAA can’t use in seasonal predictions because they happen on short timescales.  Those short-term factors are things like where the jet stream will be or what the trajectory of the sea ice will be.

Why do these seasonal forecasts matter? As NOAA staff write, “Seasonal outlooks help communities prepare for what is likely to come in the months ahead and minimize weather’s impacts on lives and livelihoods. Resources such as drought.gov and climate.gov provide comprehensive tools to better understand and plan for climate-driven hazards. Empowering people with actionable forecasts, seasonal predictions and winter weather safety tips is key to NOAA’s effort to build a more Weather- and Climate-Ready Nation.”  (NOAA)

NOAA’s Climate Prediction Center updates the three-month outlook around the 15th of each month.  For the latest predictions, go to https://www.cpc.ncep.noaa.gov/products/predictions/long_range/

Additional Resources:

Visualization Type: Heatmap

Data Source:

Visualization Source:

Search Trends

Slow Reveal
Notice, Wonder, Connect

Suggested Student Headlines: “Iditarod Search Trends in Alaska, What Do They Tell Us?”, “How Spikes Affect Us Most,” “The Trends of Your Home State Compared to the Rest of the United States,” “The Downfall of the Iditarod,” “Iditarod: Interesting Here, Unknown There,” “How Popular is Sled Dog Racing?”, “National Park Searches: Are They Popular or Not?”

This series of graphs explores trends in internet searches. Google Trends shows search requests made on Google and their relative popularity. We’ve included some of our favorite trends. It’s very easy to make your own that are best suited to your class’s specific interests; see directions below.

On each graph, the number of searches for a topic is divided by the total number of searches from that area and time period, and the result is scaled so the highest point is 100 and everything else falls below that.  Google does some other filtering as well: topics with very few searches are treated as zero, and someone who searches the same topic many times in a short time frame is counted as one search.
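
That scaling can be mimicked in a few lines of Python. The weekly counts below are invented (Google doesn’t publish raw search counts); the point is just to show the divide-by-the-total, then rescale-so-the-peak-is-100 arithmetic.

# Hypothetical weekly counts: searches for one topic and total searches from the region.
topic_searches = [120, 300, 80, 450, 200]
total_searches = [10_000, 12_000, 9_000, 15_000, 11_000]

# Step 1: the topic's share of all searches in each week.
shares = [topic / total for topic, total in zip(topic_searches, total_searches)]

# Step 2: rescale so the busiest week is exactly 100.
peak = max(shares)
trend_index = [round(100 * share / peak) for share in shares]

print(trend_index)  # [40, 83, 30, 100, 61]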

Remember that while the data is scaled, it is still weighted toward the areas with the most people within the region. So when looking at Alaska, results from Anchorage, Juneau, and Fairbanks will heavily shape the totals. You can see this with the phrase “snow day,” which corresponds closely with when the Anchorage School District experienced heavy snow. Remember, too, that results include anyone searching from Alaska – tourists, workers, residents, etc.  The dates that Google puts on the x-axis are not very helpful because there are not enough (or consistent enough) tick marks to easily figure out the dates associated with peaks and valleys.  When you’re looking at the graph “live,” it’s much easier; you can click on any part of the line and it will say which week that data point is from.

“The latest data shows that Google processes over 99,000 searches every single second. This makes more than 8.5 billion searches a day. (Internet Live Stats, 2022)….As of January 2022, Google holds 91.9 percent of the market share (GS Statcounter, 2022). […In contrast,] Bing has 2.88 percent of the market share, Yahoo! has 1.51 percent of the total market share.” (Oberlo)  That sheer volume of searches shows that Google Trends is good for noticing patterns in the popularity of a topic, but remember that it’s not the only or most comprehensive indicator of trends.  Don’t forget to think about who is and who is not using the internet. And what about people who use search engines other than Google?

Who might use this Google Trends feature? Why and how?

  • Writers (bloggers) choosing what to write about and when to post so as to get the most clicks.  (If you want to write about fishing, when should you post?)
  • Businesses choosing when and what to sell and how to advertise
  • Reporters – what needs investigating?
  • Ad writers – Developing an ad to match what people are interested in.

“Organic searches” and “organic traffic” are the results that show up independent of ads.  Entire departments and companies are dedicated to advising other companies about “search engine optimization” (SEO). Their goal is to help your product/ad/story appear on the first page of Google search results.

“Just like many other searches, Google is also a starting point for almost half of the product searches. 46 percent of product searches begin on Google (Jumpshot, 2018). With the latest data, Amazon surpasses Google when it comes to product searches, with 54 percent of searches starting on Amazon. The Jumpshot report shows us that Amazon and Google have been switching places from 2015 to 2018 in terms of being the preferred platform for users starting their product search.”  (Oberlo)

“How does Google Trends differ from Autocomplete?

Autocomplete is a feature within Google Search designed to make it faster to complete searches that you’re beginning to type. The predictions come from real searches that happen on Google and show common and trending ones relevant to the characters that are entered and also related to your location and previous searches.

Unlike Google Trends, Autocomplete is subject to Google’s removal policies as well as algorithmic filtering designed to try to catch policy-violating predictions and not show them. Because of this, Autocomplete should not be taken as always reflecting the most popular search terms related to a topic.

Google Trends data reflects searches people make on Google every day, but it can also reflect irregular search activity, such as automated searches or queries that may be associated with attempts to spam our search results.”  (support.google.com)

What trends can you find? Go to http://trends.google.com. When you type in a word or phrase, Google will allow you to look at the results for a “search term” or a “topic.” “Search term” is the default, so make a point to click on “topic” instead.  “Topic” will give more complete results, including things like abbreviations, acronyms, the word in other languages, and other things that mean the same as your word or phrase.

Can you challenge yourself to find at least one trend where Alaska searches are:

  • higher than the US as a whole
  • lower than the US as a whole
  • very similar

Visualization Type: Line Graph

Data Source: Google Trends

Visualization Source: Google Trends

These graphs can easily be replicated. Go to Google Trends and search for a term. Then you can filter by time and region or add another search for comparison.
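
If you would rather pull the numbers into a spreadsheet or notebook instead of reading them off the website, the unofficial pytrends package can fetch the same interest-over-time data. It is a third-party library, not a Google product, and its interface can change, so treat this sketch as a starting point; the search term and region code here are just examples.

# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
# Compare interest in a term within Alaska ("US-AK") over the last five years.
pytrends.build_payload(["Iditarod"], geo="US-AK", timeframe="today 5-y")
interest = pytrends.interest_over_time()

print(interest.head())                        # weekly 0-100 values, like the website
interest.to_csv("iditarod_trend_alaska.csv")  # save for use in a spreadsheet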