Analysis and Guidance Regarding Teacher Evaluation Choices and Decisions
By Julie Cavanagh, PS 15K Chapter Leader
I have yet to meet a parent, teacher or student from a school community who tells me they believe the new teacher evaluation system being implemented in NYC is a good thing, for anyone. It seems most people understand this system is nothing more than another cog in the wheel of a machine with one clear purpose: the destruction of our public education system. This system, and the accountability and testing movement that preceded it, reduces our students, our teachers and our schools to numbers and data, dehumanizing our schools and our profession.
There is a growing movement that says, “Don’t feed the beast! Deny the data!” My heart lies with this sentiment, but in terms of the teacher evaluation framework, it may not be the right one. Let me be clear: this system is irredeemably flawed, and the illusion of choice is no choice at all. But while the system is fundamentally flawed and hurts our schools and profession, we can choose to participate in order to mitigate the damage to individual teachers’ jobs as well as to our schools and students.
MORE members and allies have received multiple requests for guidance and analysis concerning the decisions UFT members and local committees must make regarding the teacher evaluation system. Below I attempt to lay out, as I see them, the pros and cons of the choices individual teachers and school-based evaluation committees must make in the coming weeks. This is by no means complete and it would be immensely helpful if folks offer their additional comments, analysis, and suggestions in the comment section!
The Lay of the Land
There are basically three “paths” to journey on as you make decisions as an individual UFT member and as a committee:
- Non-Compliance: There is an argument to not comply with or participate in this evaluation system at all. The consequence of this decision is that the law contains defaults for every decision. Your principal would choose your observation course; you would not submit artifacts and therefore would have nothing counted toward the Danielson portion; and the principal could trigger the default for the committee, which would mean state tests and the growth model.
- Uber Compliance: Make decisions/choices that would result in the most work possible, particularly for the DOE and administration. This would flood the system with responsibilities and paperwork and highlight the sheer insanity of the evaluation system.
- The Middle Way: Make decisions and choices that will produce the best anticipated outcomes in terms of ratings for teachers and will mitigate the negative consequences of this system on children and staff.
Decisions, Decisions: Part 1 Observations and Danielson
Teachers have one individual choice to make at the beginning of the school year: which observation course they will choose and how they will relate to and work with the Danielson Framework.
As far as observations go, there are two choices: at least six informal observations of 15 minutes or more, which will include feedback (which can also be somewhat informal), OR at least one formal observation with a pre- and post-conference and more formal feedback, combined with at least three informal observations.
Choosing an observation course is an individual preference, but here are several factors to consider:
1. If you have a principal who is a “gotcha type,” choosing the course with a formal observation may offer you more protection because there are added steps to ensure more meaningful feedback and support and, therefore, more documentation. In addition, the formal observation would be an announced observation, whereas the 15-minute informals would not have to be announced.
2. If you are an inexperienced teacher, you may benefit from choosing the course with a formal observation because you would benefit from (in an ideal world) your principal’s guidance, feedback, and one would hope, support.
3. If you worry about Danielson being applied to only “15 minutes of fame” via informal observations (the rubric is complicated, and anything a principal does not see, unless it comes out of a conversation related to the observation or one of the eight artifacts teachers can submit by April, cannot be counted), then you may want to choose the formal observation course.
4. If you have been identified by your principal in the past as someone who needs more support, have been given a U or feel targeted by your principal, as mentioned in #1, the formal observation course may offer you more protection and hopefully, support.
5. If you are an experienced teacher with a good relationship with your principal and s/he is someone you trust, then choosing the informal observation course will save you both time while still offering an opportunity for feedback… and of course those 15 minutes of fame!
6. Finally, there is a political argument to choose the course with the formal observation because it will require more work on the part of administrators and highlights the overarching problems with creating such a complicated system without proper planning and resources. Within this same argument, there is also a provision in the UFT contract that provides for announced observations with pre- and post-conferences, so choosing this course maintains at least some of the protections that exist in our current (albeit expired) contract.
In terms of how teachers should work with and within the Danielson Framework:
1. Teachers must choose whether or not to submit up to eight artifacts to their principal by April of 2014 (two can be submitted at the beginning-of-year conference). These artifacts will support the principal’s rating on the Danielson rubric. This is particularly important because, when examining the rubric, it will be very difficult to give folks effective and highly effective ratings in certain component areas without them. For example, the area of planning can be more effectively supported by showing a teacher-developed curriculum map or weekly lesson plans. The flip side is that, as educators, our labor has worth; the time we are given relative to what we are asked to produce matters, and it is not currently supported in terms of either time or resources. We have been without a contract for 4+ years, and the current contract does not compensate us for this additional labor, either in time or in money. It must also be stated that creating artifacts, like elaborate lesson plans, is not necessarily an indicator of good teaching and learning. Bottom line: you can refuse to submit the artifacts and stand on principle; after all, withholding our labor is the power that lies within our union. However, since we have a union leadership that has agreed to this system and believes it is a good thing, the power of withholding our labor isn’t so powerful, and choosing not to submit artifacts will most likely hurt your overall rating for the observation portion of your evaluation.
2. There has been a lot of talk about how Danielson will force teachers to change how they use and develop curriculum, plan lessons, meet and plan with other teachers, take on additional responsibilities in our schools, and analyze data. Similar to what I state above, the issue here is principle vs. practicality. In principle, we should not jump through the hoops of writing elaborate and scripted lessons; those things do not make good teaching and learning. We would love to take on additional responsibilities and to meet and plan with our colleagues, but we have to be allotted the time and resources to do those things meaningfully, so that our labor is properly compensated and so that time is not taken away from other things that are important for our students, our school communities, and our own families. In practice, if we do not have the evidence needed to plug numbers into the Danielson rubric that add up to effective or highly effective, our jobs are at risk. Bottom line: If we don’t work outside of the contract in order to fulfill the requirements of the Danielson Framework, we risk a rating that falls below effective. Chapter leaders should work with principals to carve out additional time to meet these requirements.
As chapter leader at my school, I have helped revamp our extended-time and working-lunch schedule (we have extended lunches and preps and hold working lunches in lieu of after-school staff and grade meetings) and process to try to provide increased time and individual and grade-level team autonomy to help with the work required by the Danielson Framework. It is not enough, but as CL, I don’t feel I can encourage my colleagues to refuse the data, because they have families to feed and because the power in opting out comes from the parents of our students refusing to feed the testing beast, as well as from our union harnessing the power of our collective action, not from teachers acting alone. While it is tempting for me to encourage my fellow teachers to refuse to participate in this damaging and destructive policy, we will stand together as we are led to the proverbial slaughter, and do our best to make sure the impact on our schoolhouse is as minimal as possible while we fight for change at the statehouse.
In full disclosure, we have a principal who is not a “gotcha type” and works hard to support and protect our teachers so our students have the best educational opportunity we can provide, because she knows that supporting and retaining teachers is an in-school factor PROVEN to positively impact student achievement, along with class size. There is NO research to suggest that evaluating teachers based on test scores and scripted, pre-packaged rubrics benefits students in any way.
I understand that many educators and their school communities have leaders whose interest lies not in supporting educators but, in some cases, in harassing them, forcing compliance, or demonizing them. For these teachers and schools, this process will be particularly demoralizing and challenging, and I am truly sorry. It is my hope that the energy and anger born out of this process will be harnessed to fight for the changes we need so desperately in education and union policy.
Decisions, Decisions: Part 2 Evaluation Committees and MOSL (Measurement of Student Learning)
Evaluation committees have three decisions to make:
1. What tool will you use? State tests, NYC approved assessments (for example: F&P for elementary schools), or NYC benchmark assessments (basically Common Core tasks). Committees only have to choose one tool in either math or ELA not both.
2. Who will the target population be? Whole school, whole grade, individual class, bottom third of individual class
3. How will the test/assessment be measured? Growth model (Value Added Method with a peer group determined by the state) or Goal Setting (goals set at the committee level with feedback from the DOE)
What Tool?
Because all UFT members (with the exception of prek, PT, OT, APE and guidance counselors/social workers) will be evaluated based on a test/assessment regardless of whether one exists for their subject (think art, gym, various high school courses…) there is an argument for choosing the same tool for all UFT members in the building as an act of solidarity, to maintain fairness, and squelch any issues of competition or the consequence of making certain positions more or less desirable which would have a negative impact on our students.
I strongly agree with this sentiment and am advocating for this position. The other side of this argument is, of course, that by choosing one tool, greater emphasis is placed on that tool above all others (at least in theory), applying immense pressure on the teachers who administer that tool and on the students who take it. This portion of the decision is a political one (how can we minimize the destructive nature of these choices on union rights, solidarity and protections while also protecting the legitimacy of our work in our schools?). It is also a decision that must be made in consideration of your school climate (if you choose one tool, will that unite your teachers and lessen or eliminate potential conflict, or could it create conflict?).
Finally, there is also the question of choosing a tool that would potentially be more beneficial to students. I do not find much validity in this final question. Teachers assess their students all the time; that is what good teachers do. We do not need external pressures, data collection systems, and evaluations tied to these assessments to do the good work we do. Assessments are not designed to evaluate teachers; they are designed to inform teachers of what students know or don’t know, to highlight areas of strength and weakness, and to guide our instruction. Data does have a purpose, but that purpose is being perverted, and we must be careful to avoid feeding into the narrative that distorts our profession and harms children.
State Tests: choose one of these if your students do well on them within the framework of the growth model and/or if your school community does not want to do additional grading and data input. The merit in this decision is protecting the value of our labor (it can’t be repeated enough: we should not be giving our labor away for free, especially when we have been without a contract for four years), and it ensures students will not be subjected to additional external tests/assessments. You probably do not want to choose this option if your students do not typically do well on these tests (think about your school report card grade as a loose guideline). In addition, there is a principled argument to avoid choosing state tests so we do not place even more value on these flawed, invalid and racially/class-biased tests. Finally, because the 40% MOSL section basically equals 100%, you may want to diversify. Choosing state tests for the local 20% portion would result in the entire 40% being based on state tests.
NYC Approved Local Assessments: choose one of these if your students/school does not typically perform well on state tests, if you do not want to put an even greater emphasis on state tests, and/or if you find an approved assessment that your school community finds or would find useful. The downside to this choice is that teachers will not be able to grade their own assessments, and teachers and/or schools in some capacity will have to enter the data from these assessments; thus far, no compensation in time or resources has been offered or explained, so this will be in addition to the excessive data and paperwork responsibilities we already have. Teachers who administer these assessments will have to do so in addition to any state assessments, and there will be a pre and a post (fall and spring). School-based committees will also have a greater amount of work to do in making this choice, because the MOSL tool requires data entered for each teacher when not choosing state tests. Yes, it is a setup: noose or sword?
NYC Performance Benchmarks: Similar to the above, this choice provides an alternative to the state test as well as an alternative to a more formal assessment in general. These benchmarks are designed like tasks and would be administered in the fall and spring. The pros and the cons remain basically the same as choosing a local assessment.
Bottom Line: Choose a tool that will produce the best results for your staff, that will burden your students the least, and that will require as little free labor as possible, weighed against these other considerations. Remember, the 40% MOSL portion will count more than the 60% observation portion, so if your school performs poorly on state tests, it would be best to diversify and not choose state tests for the local 20%. I strongly encourage the choice of one tool for all teachers as an act of solidarity and a way to keep things as fair as possible in an unfair and flawed system.
Who is the Target Population?
This decision is mostly a statistical and strategic one. What target population, given the tool, would result in the best outcomes? If you choose a state test, for example, would you get better results by choosing the whole school or a certain grade? (*Note: if you choose the same tool the state uses for its 20%, the target populations cannot be the same.) If you choose a NYC performance benchmark, would it make sense to choose a particular grade that could be allocated the minimal intervention resources that exist to administer, grade and input them, so there is less of an impact on the whole school population? One ideological or political issue does exist in this decision: I would advocate against choosing individual classes as a target population, as this will potentially pit teachers against each other and feed into the deeply flawed push for merit pay. Bottom Line: Choose a target population that will result in the best mathematical outcome, and avoid feeding into individualist decisions that sort and separate our teachers and children and potentially lay the framework for merit pay. (*I should note that I *think* committees can choose any tool option and also choose any of the target populations, for example one grade. I have not fully explored the MOSL tool yet, so it is possible that while you may be able to choose one tool for all UFT members, you may have to diversify the target population. I believe, however, that all choices can be the same for every member. I am working on greater clarification and will update with a final answer.)
How will you choose to measure?
The growth model is determined by the state. This model is highly flawed, was not designed to compare results across a district, and is based on preset assumptions about the growth a student should make from year to year (in the case of a state test) or from pre to post (in the case of a local assessment or task). For certain state tests, where there is no test the previous year, a different benchmark test given in the fall, or another existing test deemed comparable by the state, will be used to measure growth. This sets up the possibility of measuring growth based on different assessments that were not necessarily designed to be compared to determine “growth.” Choosing a growth model requires no additional work from school staff. Bottom Line: Choose the growth model if this model has not statistically been a problem for your school population; doing so will protect your labor and require less time that could be spent serving children. Avoid the growth model if your school-based analysis shows that growth models do not result in good outcomes for your population (for example, your scores tend to stagnate or plateau, you have been consistently placed with inaccurate peer groups in the past, or your students are high performers who may not show much growth because they already enter at a high score). You may also want to avoid the growth model if you have concerns about the invalid and unscientific nature of this model. It should be noted that the state and city have heavily pushed the growth model, which may be, in and of itself, a reason not to choose it.
Goal setting is done at the committee level. The DOE does send its recommendations for goals, but the committee decides, with the principal having the final say to agree or trigger the default. (*Note: a principal may NEVER modify the will of the committee on ANY decision. If the principal does not agree/approve, her/his only choice is to trigger the default, which is state scores and the growth model.) Goal setting will require more work and data input. It offers more control at the school level, but committees must set goals carefully, because with a goal you either meet it or you don’t; there is only one way to “get points.” If the goal is met, full points will be given. If the goal is not met, there will be no points. Bottom Line: If your school historically does poorly with growth models, you may want to consider goal setting. If your committee feels there is a strong principled stand to take in refusing growth, because value-added is a deeply flawed measurement model for the purposes of evaluation, then you may also want to consider goal setting. If your school community feels it can set goals that are sure to be attainable, goal setting offers more control and autonomy.
I am sure this is all clear as mud! I encourage folks to use the comments section to continue the conversation, to ask questions, to offer suggestions, and share what they decide at their schools (I will certainly share our final decision when we have one, which insanely must be by September 9th!).
As you move forward weighing each decision against what will inflict the least pain and suffering upon your school community, some final thoughts in summary:
1. Make decisions based on outcomes: what will result in the best “scores” for your staff? (What tool? Who will the target population be? How will they be measured?)
2. Make decisions based on what will bring you together: do not allow these decisions to divide you. Stand in solidarity together, take care of each other, and do what benefits students and teachers collectively. (What tools/population/measurement model can you choose that will impact teaching and learning the least and can be applied evenly and fairly across subject and grade level positions?)
3. Decide to get involved: I am convinced the overwhelming majority of educators, after navigating this evaluation system, will be moved to action. Do not get discouraged; do not believe we cannot effect change. Whether you donate, sign a petition, attend a rally, come to a meeting, run for office, or join an organization, the time is now to stand up and fight the tidal wave of attacks on public education.
“These are suggestions only and do not represent the official positions of MORE or the UFT. MORE strongly advises that you conduct your own research, attend DOE/UFT trainings, and consult with your staff in making these critical decisions.”
WHEW! So much to digest! I know and realize that NYC Dept of Educ has not really been so swift to provide training to ALL their staff, especially to personnel-in-rotation, aka ATRs/ACRs. Bookmarked this so I can refer to it often! Thank you for the heads up! This will truly be a VERY interesting SY2013-2014!!!
Found out today at UFT CL training in BK that teachers in the ATR pool remain under S/U evals and are not subject to this eval system…. maybe others knew this before, but I did not!
Julie,
Thank you so much for making this accessible. I read the MOSL guide info from the DoE and just glazed over. It’s riddled with contradiction and needless, confusing detail, while leaving many unanswered questions.
John Elfrank-Dana
UFT Chapter Leader
Murry Bergtraum High School
Wow. This is amazing and I thank you much for the time spent on it. I think we are all trying to figure out the least harmful evaluation ways in terms of students and teachers. It’s a shame our union leadership can only provide minimal advice on this issue other than agreeing with the reformers that these evaluations are “fair”. Jeesh!
Two points I may add are…
The full-period planned observation must include all 22 Danielson points, whereas these 22 points can be accomplished through the six shorter(?) observations over the course of the year.
For high schools, I believe the PSAT’s in October will provide a somewhat low baseline assessment. I’m still not sure what final assessment in the spring will be matched up with this assessment.
P.S. I hate writing the word “assessment” so much.
At UFT BK CL training today they said the formal observation does NOT have to include all 22 Danielson components… the 22 components are to be covered collectively through ALL observations
I know what you mean. Generally “test” can be used, with no loss of — or shift in — meaning.
I cannot see another option but to overwhelm the administration with paperwork. I definitely noticed how the “snapshots” last year tapered off after the first two ambitious weeks. It will take days for the principal to write up feedback for all 22 points on the Danielson after my formal observation. Realistically how many teachers can be observed before he/she loses steam? Also, what would be the consequence of not giving me timely feedback? I’m sorry but I cannot help but think this whole idea has a very short and self-destructive lifespan. We all must go for the ride yet remember to stay strong!
I agree, overwhelm the administration and the entire system with paperwork, which of course will be lost in the shuffle. And who is going to input all of the data, especially at a school with a large population!!!! The whole system is a time bomb ignited and ready to explode…. SOON… Good luck to all staff, teachers, administrators (who give support), parents and most of all OUR CHILDREN!!!! May the force be with us in the 2013-2014 school year :-)
I don’t agree w. you re. administrative paperwork. A lot of the building admins *like* Danielson precisely *because* it lends itself to generic “cut and paste” analysis. Not only is the pedagogy interchangeable, but so is the critique. Time and trouble saved translates into more “down” time for them. From THEIR pov, what’s not to like?
On the committee at my school. Here are a few of the takeaways: Discussion focused on which choices would provide the best opportunity for teachers to earn the most points. Which assessments would be most authentic was discussed but quickly tabled, given that such assessments are time-consuming. We chose to “play the game.”
Choosing the “growth” model means teachers have little incentive to help the few 3s and 4s that we have, as they offer the least potential for growth.
Gym, art and foreign lang. teachers will be measured by assessments unrelated to their field.
Special ed teacher expressed concern that some of the “lowest 1/3” are lowest 1/3 for a reason. She has recommended some of those students for alternative assessment, but it has been denied since the bar for who qualifies for altern. assmt. has been raised so high.
One of the most confusing and demoralizing exercises I have ever engaged in.
Thank you so much. You have captured the true essence of this near-perfect, nefarious, and diabolical plan. Few will survive.
Joel Garcia
CL ms50
Excellent analysis.
What a shame that Mulgrew deferred the evaluations to King.
And it’s ironic that Duncan has given 37 states “federal waivers from the most egregious mandates of the No Child Left Behind Act, an extra year to implement teacher evaluations linked to new assessments that are supposed to be aligned to the new Common Core State Standards. This means the states have until 2016.”
http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/06/25/arne-duncan-tells-newspaper-editors-how-to-report-on-common-core/
some further clarification:
on this point: “(*I should note that I *think* committees can choose any tool option and also choose any of the target populations, for example one grade. I have not fully explored the MOSL tool yet, so it is possible that while you may be able to choose one tool for all UFT members, you may have to diversify the target population. I believe however, that all choices can be the same for every member. I am working on greater clarification on this and will update with a final answer.)”
In elementary and middle school (because of the 4-8 state tests), you could choose the same thing for K-2 and out-of-classroom teachers; testing grades basically have no choice. So the only way to have ALL members be evaluated the same is to trigger the default. But you could have the lower grades the same, the upper grades the same, and place out-of-classroom teachers with one of those groups. Similarly, with HS, you would have to trigger the default to make everything exactly the same, or, as Arthur wrote on the NYCeducator blog, departmentalize (similar to the elementary and middle level by grade) and have groups of members evaluated the same.
In addition, there were many portions that I was given inaccurate information about in my training with the DOE this summer, and conflicting information from different UFT officials (in full disclosure, I had to leave Amy’s excellent presentation at the BK CL meeting early, so it is possible she may have cleared some of this up; I don’t know):
- I was told at the July training that we did not have to choose both ELA and Math. This is generally false. There are some very strange cases where you would only choose one, but generally you would do both (particularly anything with tests; grades 4-8 would pretty much always be tests and both). Depending on the tool, K-2 could have just one.
- If you chose something like F&P in elementary school, you could not use it for instructional purposes, which is insane because that is what it is designed for. So choosing something that makes sense and that you already use or would like to use as a school doesn’t make sense at all, because then you can’t use it anymore!!!! Furthermore, for third grade, whatever you chose could be used in the fall, but the state test would be used as the benchmark in the spring. So you would be comparing a fall F&P level with a spring test score. Again, insane.
- Goal/target setting was completely misrepresented in the DOE training. The schedule for choosing this option is also insane. We would have to make choices and have all students assessed by Oct. 4th and enter all individual data by Oct. 31st; the DOE would then send its targets, and we could appeal them, but there is no way to know if what we believe the targets should be would become the actual targets, so it would be a big gamble to choose this option. And if 60% of the target isn’t met, there would be zero points. For example, if you had a class of 20 (ha-ha), then 15/20 would have to meet the target. This is NOT target/goal setting. It is nonsense. And the data entry alone makes it nearly impossible.
- The UFT is saying speech is not included; the DOE and the person I spoke with Thursday insist it is.
There is more I learned in the last few days, but not worth writing. My school committee decided to not participate any more than we already have in this rigged, fraudulent game. We agreed to allow the default to trigger- at least this way, all of our members will be evaluated the same and we can stop wasting our time trying to make sense out of nonsense.
I was told in our DOE training on August 29th that if ONE student did not meet their goal in the goal-setting option, the teacher would fail (I assume they meant be considered ineffective in that measure).
What I have been told, on multiple occasions, is that it is 60%. The number breakdown quoted as an example by a DOE person to me was 15/20 students (which is funny, because what class has 20 kids, and also that is not 60%). I wonder, if we had 100 teachers in a room, would they all quote something different!?!
Julie, has there been any talk about trade teachers, aka vocational ed, aka CTE teachers, and their assessments? We have no state assessments, and the certifications and licenses earned in these trade courses are mostly not recognized by NYSED (only a very few are) and do cost the schools big money, especially when we are talking about all of the students. Any real word or talk about that? There are only two trade exam companies NYSED sort of adopts, but again, both cost the schools money… BUT they are only written exams, NOT “hands-on” (practical), and in CTE “hands-on” is equally important. “What good is a mechanic who has the license but can’t remember how to put your lug nuts back on?” Thanks! :)
I have not heard anything about this, sorry!
I have a question about the formal observation choice. If I choose to have a formal observation, do I have to include ALL 22 Danielson components within that lesson? I was told today that the three informals that come with the formal will not count toward enabling the teacher to cover any missed Danielson components. I know there are artifacts as well, and these can help you complete the Danielson rubric for the year. But I think it is too much pressure to have to do one amazing lesson and have my score basically based on that.
No. All 22 do not need to happen in one observation. With the formal-plus-three (or more) informals option, the observations collectively, along with the artifacts, lead to your rating on the 22 components… it is all combined. This is what Amy said at the BK CL meeting and what we discussed in my school with the DOE “talent coach” and someone from our network.
Thank you.
What about Pre-K and ATR teachers, then? I cannot get a straight answer about how their year will be. Are they still S/U? Do they still need to do the meetings with admin at the start of the year? Do they still need to choose four observations with a formal or six with no formal?
S/U for both. The UFT has said no Danielson or the rest of it; some DOE folks have said yes, so I don’t know the complete answer for sure…. Common sense would be that if you don’t fall under the system (state law), they (the local DOE) can’t apply parts of the system to you, but of course very little of this makes sense, so…… In my building these folks will be observed and rated as they have been under the S/U system, though my principal will use the same tools she uses with everyone else to give feedback, etc., but they will not get a rating calculated through a Danielson rubric, unless someone tells us it has to be otherwise and the UFT agrees.
Any info from CLs or committees on how District 75 teachers who teach students taking standardized assessments will be evaluated on the 40% state and local measures? Will they be included? I hear it’s a mess and teachers are freaking out, since in D75 students score lower than the rest of the city, and most students’ MAIN goals are set on an IEP that usually has many lower-level goals and objectives.
I have not, but in this sick game IEPs do not matter. If your students are not taking the alternate assessment, then their teacher would follow the same allocations and options as her/his grade. The data would be applied to a growth model, however, so in theory children with varying degrees of special needs will be peer-grouped, but who really knows how accurate those groups will be….
Julie,
I see that under the heading “What Tool” you mention that APE teachers do not fall under the new system. I am an APE teacher in D75 and have not heard anything about that being the case. Do you recall where you found that information so I can possibly find out more?
Thank You,
Stephanie
I was explicitly told this at my DOE training in July and asked again for clarification with my network in August (we have an APE teacher and wanted to make sure she knew her rights).
Thank you for getting back so fast!
Julie, Thank you for your Work, your Thought, your Passion!
My AP told me today that my meeting is on Friday to decide what I want to choose, 4 or 6. She also mentioned that she will ask me for my goals for the year. She said she would send an e-mail to me. I have not had the chance to speak with anyone yet; I will tomorrow. Do you have any advice? I have not heard anything about discussing my yearly goals in this conference.
Thank you! Peter
I am a parent of a child in second grade. Our school will be using DRA. What if I opt my child out of the MOSL? I am really considering this as an option.
I am a parent and I find this system deplorable. My son’s school has chosen the DRA for its local measure. What happens if I decided to opt him out of any of these assessments tied to teacher evaluations?