I have two identical models of a 2-story building. The only difference is that in the right one I have run “Solve Adjacency”. Why does the sensor grid that I assigned to the room in the right building look strange, as if the grids are not distributed correctly?
I imagine this is because we don’t simplify/rebuild the base face before meshing in the Rhino plugin. @mingbo, can you confirm and possibly improve this?
That looks correct to me and matches the setbacks in your building. The parts that touch the first floor are set to Surface and the parts that don’t are set to Outdoors. Which part did you expect to be different?
Thanks Mostapha. Yes, you are right, the boundary conditions make sense. I just posted that to show you the boundary conditions and to see whether they affect the sensor grid question I had in some way.
The sensor grid is generated from the input geometry, which in this case is the floor. Before solving adjacency, the floor was a single piece of geometry, but it was split into smaller pieces in order to match the adjacent surfaces. That is why the sensor grid is generated differently before and after solving adjacency.
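To make the behaviour concrete, here is a minimal 1-D sketch in plain Python (not the actual Honeybee/Rhino plugin code): each floor face is meshed independently, so a floor split by Solve Adjacency produces a different sensor distribution than the same floor meshed as one piece. The `sensor_centers` helper is hypothetical, for illustration only.

```python
import math

def sensor_centers(start, end, spacing=1.5):
    """Place sensors at cell centers of a 1-D mesh over [start, end].

    When the length is not an even multiple of the spacing, the cell
    size shrinks to fit, mimicking how a mesher adapts to each face's
    own dimensions.
    """
    n = max(1, math.ceil((end - start) / spacing))
    step = (end - start) / n
    return [round(start + (i + 0.5) * step, 6) for i in range(n)]

# One 10 m floor face: sensors are evenly spaced across the whole floor.
whole = sensor_centers(0.0, 10.0)

# After Solve Adjacency the floor is split at x = 4 m; each piece is
# meshed on its own, so the spacing resets at the seam.
split = sensor_centers(0.0, 4.0) + sensor_centers(4.0, 10.0)

print(whole)
print(split)
```

Running this shows the `whole` grid has one uniform spacing while the `split` grid changes spacing across the seam, which is the unevenness visible in the right-hand building.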
I can see your point, but at the same time I think @monahmi’s expectation makes sense. Since we use the same model for both energy and daylight simulation, there are situations like this where one can affect the other, but from a daylighting perspective you want an evenly distributed grid for this room.
What if we add a MergeCoplanar option that merges coplanar base faces before generating the sensors for a room? That way the user would have the choice to switch between the two cases.
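A hedged sketch of what that MergeCoplanar option could do, again in a 1-D analogy with plain Python (the name `merge_contiguous` and the interval representation are hypothetical stand-ins for joining coplanar floor faces, not the plugin's API): rejoin the pieces the split produced before meshing, so the grid matches the pre-adjacency result.

```python
def merge_contiguous(segments, tol=1e-6):
    """Merge touching 1-D segments (stand-ins for coplanar floor faces).

    Segments whose ends meet within the tolerance are joined into one,
    so a floor split by Solve Adjacency becomes a single face again
    before any sensors are generated.
    """
    merged = []
    for start, end in sorted(segments):
        if merged and start - merged[-1][1] <= tol:
            # This piece touches the previous one; extend it instead
            # of starting a new segment.
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return merged

# The floor split at x = 4 m by Solve Adjacency merges back into one.
split_floor = [(0.0, 4.0), (4.0, 10.0)]
print(merge_contiguous(split_floor))  # -> [(0.0, 10.0)]
```

Meshing the merged result then gives the same even grid as before Solve Adjacency, while skipping the merge keeps the split-face behaviour, which is exactly the switch between the two cases.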